00:00:00.000 Started by upstream project "autotest-per-patch" build number 127130 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.034 using credential 00000000-0000-0000-0000-000000000002 00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.079 Using shallow fetch with depth 1 00:00:00.079 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.079 > git --version # timeout=10 00:00:00.100 > git --version # 'git version 2.39.2' 00:00:00.100 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.717 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.726 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.737 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.737 > git config core.sparsecheckout # timeout=10 00:00:05.746 > git read-tree -mu HEAD # timeout=10 00:00:05.761 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.789 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.789 > git 
rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:05.869 [Pipeline] Start of Pipeline 00:00:05.880 [Pipeline] library 00:00:05.881 Loading library shm_lib@master 00:00:05.881 Library shm_lib@master is cached. Copying from home. 00:00:05.895 [Pipeline] node 00:00:05.904 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.905 [Pipeline] { 00:00:05.914 [Pipeline] catchError 00:00:05.915 [Pipeline] { 00:00:05.927 [Pipeline] wrap 00:00:05.938 [Pipeline] { 00:00:05.944 [Pipeline] stage 00:00:05.945 [Pipeline] { (Prologue) 00:00:06.106 [Pipeline] sh 00:00:06.384 + logger -p user.info -t JENKINS-CI 00:00:06.400 [Pipeline] echo 00:00:06.401 Node: GP11 00:00:06.410 [Pipeline] sh 00:00:06.705 [Pipeline] setCustomBuildProperty 00:00:06.713 [Pipeline] echo 00:00:06.714 Cleanup processes 00:00:06.718 [Pipeline] sh 00:00:06.991 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.991 2262673 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.001 [Pipeline] sh 00:00:07.275 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.275 ++ grep -v 'sudo pgrep' 00:00:07.275 ++ awk '{print $1}' 00:00:07.275 + sudo kill -9 00:00:07.276 + true 00:00:07.287 [Pipeline] cleanWs 00:00:07.295 [WS-CLEANUP] Deleting project workspace... 00:00:07.295 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.301 [WS-CLEANUP] done 00:00:07.304 [Pipeline] setCustomBuildProperty 00:00:07.313 [Pipeline] sh 00:00:07.588 + sudo git config --global --replace-all safe.directory '*' 00:00:07.669 [Pipeline] httpRequest 00:00:07.704 [Pipeline] echo 00:00:07.706 Sorcerer 10.211.164.101 is alive 00:00:07.715 [Pipeline] httpRequest 00:00:07.719 HttpMethod: GET 00:00:07.720 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.720 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.731 Response Code: HTTP/1.1 200 OK 00:00:07.731 Success: Status code 200 is in the accepted range: 200,404 00:00:07.732 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:10.361 [Pipeline] sh 00:00:10.644 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:10.660 [Pipeline] httpRequest 00:00:10.677 [Pipeline] echo 00:00:10.679 Sorcerer 10.211.164.101 is alive 00:00:10.688 [Pipeline] httpRequest 00:00:10.692 HttpMethod: GET 00:00:10.693 URL: http://10.211.164.101/packages/spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz 00:00:10.693 Sending request to url: http://10.211.164.101/packages/spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz 00:00:10.709 Response Code: HTTP/1.1 200 OK 00:00:10.710 Success: Status code 200 is in the accepted range: 200,404 00:00:10.710 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz 00:03:04.871 [Pipeline] sh 00:03:05.154 + tar --no-same-owner -xf spdk_e5ef9abc9ee9c86a9ff61108fb262630413e40ec.tar.gz 00:03:07.696 [Pipeline] sh 00:03:07.978 + git -C spdk log --oneline -n5 00:03:07.978 e5ef9abc9 test/scheduler: Add a system level test for the scheduler_set_option RPC 00:03:07.978 223450b47 lib/event: Add support for core isolation in scheduling 00:03:07.978 6a0934c18 lib/event: Modify 
spdk_reactor_set_interrupt_mode() to be called from scheduling reactor 00:03:07.978 d005e023b raid: fix empty slot not updated in sb after resize 00:03:07.978 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:03:07.989 [Pipeline] } 00:03:08.004 [Pipeline] // stage 00:03:08.012 [Pipeline] stage 00:03:08.013 [Pipeline] { (Prepare) 00:03:08.029 [Pipeline] writeFile 00:03:08.045 [Pipeline] sh 00:03:08.325 + logger -p user.info -t JENKINS-CI 00:03:08.337 [Pipeline] sh 00:03:08.618 + logger -p user.info -t JENKINS-CI 00:03:08.630 [Pipeline] sh 00:03:08.911 + cat autorun-spdk.conf 00:03:08.911 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:08.911 SPDK_TEST_NVMF=1 00:03:08.911 SPDK_TEST_NVME_CLI=1 00:03:08.911 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:08.911 SPDK_TEST_NVMF_NICS=e810 00:03:08.911 SPDK_TEST_VFIOUSER=1 00:03:08.911 SPDK_RUN_UBSAN=1 00:03:08.911 NET_TYPE=phy 00:03:08.919 RUN_NIGHTLY=0 00:03:08.923 [Pipeline] readFile 00:03:08.943 [Pipeline] withEnv 00:03:08.945 [Pipeline] { 00:03:08.957 [Pipeline] sh 00:03:09.285 + set -ex 00:03:09.285 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:09.285 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:09.285 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:09.285 ++ SPDK_TEST_NVMF=1 00:03:09.285 ++ SPDK_TEST_NVME_CLI=1 00:03:09.285 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:09.285 ++ SPDK_TEST_NVMF_NICS=e810 00:03:09.285 ++ SPDK_TEST_VFIOUSER=1 00:03:09.285 ++ SPDK_RUN_UBSAN=1 00:03:09.285 ++ NET_TYPE=phy 00:03:09.285 ++ RUN_NIGHTLY=0 00:03:09.285 + case $SPDK_TEST_NVMF_NICS in 00:03:09.285 + DRIVERS=ice 00:03:09.285 + [[ tcp == \r\d\m\a ]] 00:03:09.285 + [[ -n ice ]] 00:03:09.285 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:09.285 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:09.285 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:09.285 rmmod: ERROR: Module irdma is not currently loaded 00:03:09.285 rmmod: ERROR: Module i40iw is not currently 
loaded 00:03:09.285 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:09.285 + true 00:03:09.285 + for D in $DRIVERS 00:03:09.285 + sudo modprobe ice 00:03:09.285 + exit 0 00:03:09.294 [Pipeline] } 00:03:09.307 [Pipeline] // withEnv 00:03:09.312 [Pipeline] } 00:03:09.323 [Pipeline] // stage 00:03:09.333 [Pipeline] catchError 00:03:09.335 [Pipeline] { 00:03:09.350 [Pipeline] timeout 00:03:09.350 Timeout set to expire in 50 min 00:03:09.352 [Pipeline] { 00:03:09.366 [Pipeline] stage 00:03:09.368 [Pipeline] { (Tests) 00:03:09.383 [Pipeline] sh 00:03:09.664 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:09.664 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:09.664 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:09.664 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:09.664 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.664 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:09.664 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:09.664 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:09.664 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:09.664 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:09.664 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:09.664 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:09.664 + source /etc/os-release 00:03:09.664 ++ NAME='Fedora Linux' 00:03:09.664 ++ VERSION='38 (Cloud Edition)' 00:03:09.664 ++ ID=fedora 00:03:09.664 ++ VERSION_ID=38 00:03:09.664 ++ VERSION_CODENAME= 00:03:09.664 ++ PLATFORM_ID=platform:f38 00:03:09.665 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:09.665 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:09.665 ++ LOGO=fedora-logo-icon 00:03:09.665 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:09.665 ++ HOME_URL=https://fedoraproject.org/ 00:03:09.665 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:09.665 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:09.665 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:09.665 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:09.665 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:09.665 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:09.665 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:09.665 ++ SUPPORT_END=2024-05-14 00:03:09.665 ++ VARIANT='Cloud Edition' 00:03:09.665 ++ VARIANT_ID=cloud 00:03:09.665 + uname -a 00:03:09.665 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:09.665 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:10.665 Hugepages 00:03:10.665 node hugesize free / total 00:03:10.665 node0 1048576kB 0 / 0 00:03:10.665 node0 2048kB 0 / 0 00:03:10.665 node1 1048576kB 0 / 0 00:03:10.665 node1 2048kB 0 / 0 00:03:10.665 00:03:10.665 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.665 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:10.665 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:03:10.665 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:10.665 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:10.665 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:10.665 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:10.665 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:10.665 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:10.665 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:10.665 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:10.665 + rm -f /tmp/spdk-ld-path 00:03:10.665 + source autorun-spdk.conf 00:03:10.665 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:10.665 ++ SPDK_TEST_NVMF=1 00:03:10.665 ++ SPDK_TEST_NVME_CLI=1 00:03:10.665 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:10.665 ++ SPDK_TEST_NVMF_NICS=e810 00:03:10.665 ++ SPDK_TEST_VFIOUSER=1 00:03:10.665 ++ SPDK_RUN_UBSAN=1 00:03:10.665 ++ NET_TYPE=phy 00:03:10.665 ++ RUN_NIGHTLY=0 00:03:10.665 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:10.665 + [[ -n '' ]] 00:03:10.665 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.665 + for M in /var/spdk/build-*-manifest.txt 00:03:10.665 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:10.665 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:10.665 + for M in /var/spdk/build-*-manifest.txt 00:03:10.665 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:10.665 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:10.665 ++ uname 00:03:10.665 + [[ Linux == \L\i\n\u\x ]] 00:03:10.665 + sudo dmesg -T 
00:03:10.665 + sudo dmesg --clear 00:03:10.665 + dmesg_pid=2263989 00:03:10.665 + [[ Fedora Linux == FreeBSD ]] 00:03:10.665 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:10.665 + sudo dmesg -Tw 00:03:10.665 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:10.665 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:10.665 + [[ -x /usr/src/fio-static/fio ]] 00:03:10.665 + export FIO_BIN=/usr/src/fio-static/fio 00:03:10.665 + FIO_BIN=/usr/src/fio-static/fio 00:03:10.665 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:10.665 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:10.665 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:10.665 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:10.665 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:10.665 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:10.665 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:10.665 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:10.665 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.665 Test configuration: 00:03:10.665 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:10.665 SPDK_TEST_NVMF=1 00:03:10.665 SPDK_TEST_NVME_CLI=1 00:03:10.665 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:10.665 SPDK_TEST_NVMF_NICS=e810 00:03:10.665 SPDK_TEST_VFIOUSER=1 00:03:10.665 SPDK_RUN_UBSAN=1 00:03:10.665 NET_TYPE=phy 00:03:10.924 RUN_NIGHTLY=0 07:08:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:10.924 07:08:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:10.924 07:08:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.924 07:08:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.924 07:08:43 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.924 07:08:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.924 07:08:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.924 07:08:43 -- paths/export.sh@5 -- $ export PATH 00:03:10.924 07:08:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.924 07:08:43 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:10.924 07:08:43 -- common/autobuild_common.sh@447 -- $ date +%s 00:03:10.924 07:08:43 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721884123.XXXXXX 
00:03:10.924 07:08:43 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721884123.v1n25J 00:03:10.924 07:08:43 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:03:10.924 07:08:43 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:03:10.924 07:08:43 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:10.924 07:08:43 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:10.924 07:08:43 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:10.924 07:08:43 -- common/autobuild_common.sh@463 -- $ get_config_params 00:03:10.924 07:08:43 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:03:10.924 07:08:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.924 07:08:43 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:10.924 07:08:43 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:03:10.924 07:08:43 -- pm/common@17 -- $ local monitor 00:03:10.924 07:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.924 07:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.924 07:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.924 07:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.924 07:08:43 -- pm/common@21 -- $ date +%s 00:03:10.924 07:08:43 -- pm/common@25 -- $ sleep 1 00:03:10.924 07:08:43 -- pm/common@21 -- $ date 
+%s 00:03:10.924 07:08:43 -- pm/common@21 -- $ date +%s 00:03:10.924 07:08:43 -- pm/common@21 -- $ date +%s 00:03:10.924 07:08:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884123 00:03:10.924 07:08:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884123 00:03:10.924 07:08:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884123 00:03:10.924 07:08:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884123 00:03:10.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884123_collect-vmstat.pm.log 00:03:10.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884123_collect-cpu-load.pm.log 00:03:10.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884123_collect-cpu-temp.pm.log 00:03:10.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884123_collect-bmc-pm.bmc.pm.log 00:03:11.858 07:08:44 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:03:11.858 07:08:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:11.858 07:08:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:11.858 07:08:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.858 07:08:44 -- 
spdk/autobuild.sh@16 -- $ date -u 00:03:11.858 Thu Jul 25 05:08:44 AM UTC 2024 00:03:11.858 07:08:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:11.858 v24.09-pre-321-ge5ef9abc9 00:03:11.858 07:08:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:11.858 07:08:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:11.858 07:08:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:11.858 07:08:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:11.858 07:08:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:11.858 07:08:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.858 ************************************ 00:03:11.858 START TEST ubsan 00:03:11.858 ************************************ 00:03:11.858 07:08:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:11.858 using ubsan 00:03:11.858 00:03:11.858 real 0m0.000s 00:03:11.858 user 0m0.000s 00:03:11.858 sys 0m0.000s 00:03:11.858 07:08:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:11.858 07:08:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:11.858 ************************************ 00:03:11.858 END TEST ubsan 00:03:11.858 ************************************ 00:03:11.858 07:08:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:11.858 07:08:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.858 07:08:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.858 07:08:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.858 07:08:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.858 07:08:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.858 07:08:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.858 07:08:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.858 07:08:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:12.116 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:12.116 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:12.374 Using 'verbs' RDMA provider 00:03:22.907 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:32.880 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:32.880 Creating mk/config.mk...done. 00:03:32.880 Creating mk/cc.flags.mk...done. 00:03:32.880 Type 'make' to build. 00:03:32.880 07:09:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:32.880 07:09:04 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:32.880 07:09:04 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:32.880 07:09:04 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.880 ************************************ 00:03:32.880 START TEST make 00:03:32.880 ************************************ 00:03:32.880 07:09:04 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:32.880 make[1]: Nothing to be done for 'all'. 
00:03:33.821 The Meson build system 00:03:33.821 Version: 1.3.1 00:03:33.821 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:33.821 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:33.821 Build type: native build 00:03:33.821 Project name: libvfio-user 00:03:33.821 Project version: 0.0.1 00:03:33.821 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:33.821 C linker for the host machine: cc ld.bfd 2.39-16 00:03:33.821 Host machine cpu family: x86_64 00:03:33.821 Host machine cpu: x86_64 00:03:33.821 Run-time dependency threads found: YES 00:03:33.821 Library dl found: YES 00:03:33.821 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:33.821 Run-time dependency json-c found: YES 0.17 00:03:33.821 Run-time dependency cmocka found: YES 1.1.7 00:03:33.821 Program pytest-3 found: NO 00:03:33.821 Program flake8 found: NO 00:03:33.821 Program misspell-fixer found: NO 00:03:33.821 Program restructuredtext-lint found: NO 00:03:33.821 Program valgrind found: YES (/usr/bin/valgrind) 00:03:33.821 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:33.821 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:33.821 Compiler for C supports arguments -Wwrite-strings: YES 00:03:33.821 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:33.821 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:33.821 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:33.821 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:33.821 Build targets in project: 8 00:03:33.821 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:33.821 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:33.821 00:03:33.821 libvfio-user 0.0.1 00:03:33.821 00:03:33.821 User defined options 00:03:33.821 buildtype : debug 00:03:33.821 default_library: shared 00:03:33.821 libdir : /usr/local/lib 00:03:33.821 00:03:33.821 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:34.395 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:34.700 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:34.700 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:34.700 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:34.700 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:34.700 [5/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:34.700 [6/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:34.700 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:34.700 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:34.700 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:34.700 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:34.700 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:34.700 [12/37] Compiling C object samples/null.p/null.c.o 00:03:34.967 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:34.967 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:34.967 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:34.967 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:34.967 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:34.967 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:34.967 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:34.967 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:34.967 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:34.967 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:34.967 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:34.967 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:34.967 [25/37] Compiling C object samples/server.p/server.c.o 00:03:34.967 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:34.967 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:03:34.967 [28/37] Compiling C object samples/client.p/client.c.o 00:03:34.967 [29/37] Linking target samples/client 00:03:35.225 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:35.225 [31/37] Linking target test/unit_tests 00:03:35.225 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:35.225 [33/37] Linking target samples/null 00:03:35.225 [34/37] Linking target samples/server 00:03:35.225 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:35.225 [36/37] Linking target samples/lspci 00:03:35.225 [37/37] Linking target samples/gpio-pci-idio-16 00:03:35.225 INFO: autodetecting backend as ninja 00:03:35.225 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:35.484 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:36.098 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:36.098 ninja: no work to do. 
00:03:41.370 The Meson build system 00:03:41.370 Version: 1.3.1 00:03:41.370 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:41.370 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:41.370 Build type: native build 00:03:41.370 Program cat found: YES (/usr/bin/cat) 00:03:41.370 Project name: DPDK 00:03:41.370 Project version: 24.03.0 00:03:41.370 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:41.370 C linker for the host machine: cc ld.bfd 2.39-16 00:03:41.370 Host machine cpu family: x86_64 00:03:41.370 Host machine cpu: x86_64 00:03:41.370 Message: ## Building in Developer Mode ## 00:03:41.370 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:41.370 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:41.370 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:41.370 Program python3 found: YES (/usr/bin/python3) 00:03:41.370 Program cat found: YES (/usr/bin/cat) 00:03:41.370 Compiler for C supports arguments -march=native: YES 00:03:41.370 Checking for size of "void *" : 8 00:03:41.370 Checking for size of "void *" : 8 (cached) 00:03:41.370 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:41.370 Library m found: YES 00:03:41.370 Library numa found: YES 00:03:41.370 Has header "numaif.h" : YES 00:03:41.370 Library fdt found: NO 00:03:41.370 Library execinfo found: NO 00:03:41.370 Has header "execinfo.h" : YES 00:03:41.370 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:41.370 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:41.370 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:41.370 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:41.370 Run-time dependency openssl found: YES 3.0.9 00:03:41.370 Run-time 
dependency libpcap found: YES 1.10.4 00:03:41.370 Has header "pcap.h" with dependency libpcap: YES 00:03:41.370 Compiler for C supports arguments -Wcast-qual: YES 00:03:41.370 Compiler for C supports arguments -Wdeprecated: YES 00:03:41.370 Compiler for C supports arguments -Wformat: YES 00:03:41.370 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:41.370 Compiler for C supports arguments -Wformat-security: NO 00:03:41.370 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:41.370 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:41.370 Compiler for C supports arguments -Wnested-externs: YES 00:03:41.370 Compiler for C supports arguments -Wold-style-definition: YES 00:03:41.370 Compiler for C supports arguments -Wpointer-arith: YES 00:03:41.370 Compiler for C supports arguments -Wsign-compare: YES 00:03:41.370 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:41.370 Compiler for C supports arguments -Wundef: YES 00:03:41.370 Compiler for C supports arguments -Wwrite-strings: YES 00:03:41.370 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:41.370 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:41.370 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:41.370 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:41.370 Program objdump found: YES (/usr/bin/objdump) 00:03:41.370 Compiler for C supports arguments -mavx512f: YES 00:03:41.370 Checking if "AVX512 checking" compiles: YES 00:03:41.370 Fetching value of define "__SSE4_2__" : 1 00:03:41.370 Fetching value of define "__AES__" : 1 00:03:41.370 Fetching value of define "__AVX__" : 1 00:03:41.370 Fetching value of define "__AVX2__" : (undefined) 00:03:41.370 Fetching value of define "__AVX512BW__" : (undefined) 00:03:41.370 Fetching value of define "__AVX512CD__" : (undefined) 00:03:41.370 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:41.370 Fetching 
value of define "__AVX512F__" : (undefined) 00:03:41.370 Fetching value of define "__AVX512VL__" : (undefined) 00:03:41.370 Fetching value of define "__PCLMUL__" : 1 00:03:41.370 Fetching value of define "__RDRND__" : 1 00:03:41.370 Fetching value of define "__RDSEED__" : (undefined) 00:03:41.370 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:41.370 Fetching value of define "__znver1__" : (undefined) 00:03:41.370 Fetching value of define "__znver2__" : (undefined) 00:03:41.370 Fetching value of define "__znver3__" : (undefined) 00:03:41.370 Fetching value of define "__znver4__" : (undefined) 00:03:41.371 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:41.371 Message: lib/log: Defining dependency "log" 00:03:41.371 Message: lib/kvargs: Defining dependency "kvargs" 00:03:41.371 Message: lib/telemetry: Defining dependency "telemetry" 00:03:41.371 Checking for function "getentropy" : NO 00:03:41.371 Message: lib/eal: Defining dependency "eal" 00:03:41.371 Message: lib/ring: Defining dependency "ring" 00:03:41.371 Message: lib/rcu: Defining dependency "rcu" 00:03:41.371 Message: lib/mempool: Defining dependency "mempool" 00:03:41.371 Message: lib/mbuf: Defining dependency "mbuf" 00:03:41.371 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:41.371 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:41.371 Compiler for C supports arguments -mpclmul: YES 00:03:41.371 Compiler for C supports arguments -maes: YES 00:03:41.371 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:41.371 Compiler for C supports arguments -mavx512bw: YES 00:03:41.371 Compiler for C supports arguments -mavx512dq: YES 00:03:41.371 Compiler for C supports arguments -mavx512vl: YES 00:03:41.371 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:41.371 Compiler for C supports arguments -mavx2: YES 00:03:41.371 Compiler for C supports arguments -mavx: YES 00:03:41.371 Message: lib/net: Defining dependency "net" 00:03:41.371 
Message: lib/meter: Defining dependency "meter" 00:03:41.371 Message: lib/ethdev: Defining dependency "ethdev" 00:03:41.371 Message: lib/pci: Defining dependency "pci" 00:03:41.371 Message: lib/cmdline: Defining dependency "cmdline" 00:03:41.371 Message: lib/hash: Defining dependency "hash" 00:03:41.371 Message: lib/timer: Defining dependency "timer" 00:03:41.371 Message: lib/compressdev: Defining dependency "compressdev" 00:03:41.371 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:41.371 Message: lib/dmadev: Defining dependency "dmadev" 00:03:41.371 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:41.371 Message: lib/power: Defining dependency "power" 00:03:41.371 Message: lib/reorder: Defining dependency "reorder" 00:03:41.371 Message: lib/security: Defining dependency "security" 00:03:41.371 Has header "linux/userfaultfd.h" : YES 00:03:41.371 Has header "linux/vduse.h" : YES 00:03:41.371 Message: lib/vhost: Defining dependency "vhost" 00:03:41.371 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:41.371 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:41.371 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:41.371 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:41.371 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:41.371 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:41.371 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:41.371 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:41.371 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:41.371 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:41.371 Program doxygen found: YES (/usr/bin/doxygen) 00:03:41.371 Configuring doxy-api-html.conf using configuration 00:03:41.371 Configuring doxy-api-man.conf using configuration 00:03:41.371 
Program mandb found: YES (/usr/bin/mandb) 00:03:41.371 Program sphinx-build found: NO 00:03:41.371 Configuring rte_build_config.h using configuration 00:03:41.371 Message: 00:03:41.371 ================= 00:03:41.371 Applications Enabled 00:03:41.371 ================= 00:03:41.371 00:03:41.371 apps: 00:03:41.371 00:03:41.371 00:03:41.371 Message: 00:03:41.371 ================= 00:03:41.371 Libraries Enabled 00:03:41.371 ================= 00:03:41.371 00:03:41.371 libs: 00:03:41.371 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:41.371 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:41.371 cryptodev, dmadev, power, reorder, security, vhost, 00:03:41.371 00:03:41.371 Message: 00:03:41.371 =============== 00:03:41.371 Drivers Enabled 00:03:41.371 =============== 00:03:41.371 00:03:41.371 common: 00:03:41.371 00:03:41.371 bus: 00:03:41.371 pci, vdev, 00:03:41.371 mempool: 00:03:41.371 ring, 00:03:41.371 dma: 00:03:41.371 00:03:41.371 net: 00:03:41.371 00:03:41.371 crypto: 00:03:41.371 00:03:41.371 compress: 00:03:41.371 00:03:41.371 vdpa: 00:03:41.371 00:03:41.371 00:03:41.371 Message: 00:03:41.371 ================= 00:03:41.371 Content Skipped 00:03:41.371 ================= 00:03:41.371 00:03:41.371 apps: 00:03:41.371 dumpcap: explicitly disabled via build config 00:03:41.371 graph: explicitly disabled via build config 00:03:41.371 pdump: explicitly disabled via build config 00:03:41.371 proc-info: explicitly disabled via build config 00:03:41.371 test-acl: explicitly disabled via build config 00:03:41.371 test-bbdev: explicitly disabled via build config 00:03:41.371 test-cmdline: explicitly disabled via build config 00:03:41.371 test-compress-perf: explicitly disabled via build config 00:03:41.371 test-crypto-perf: explicitly disabled via build config 00:03:41.371 test-dma-perf: explicitly disabled via build config 00:03:41.371 test-eventdev: explicitly disabled via build config 00:03:41.371 test-fib: explicitly disabled via build 
config 00:03:41.371 test-flow-perf: explicitly disabled via build config 00:03:41.371 test-gpudev: explicitly disabled via build config 00:03:41.371 test-mldev: explicitly disabled via build config 00:03:41.371 test-pipeline: explicitly disabled via build config 00:03:41.371 test-pmd: explicitly disabled via build config 00:03:41.371 test-regex: explicitly disabled via build config 00:03:41.371 test-sad: explicitly disabled via build config 00:03:41.371 test-security-perf: explicitly disabled via build config 00:03:41.371 00:03:41.371 libs: 00:03:41.371 argparse: explicitly disabled via build config 00:03:41.371 metrics: explicitly disabled via build config 00:03:41.371 acl: explicitly disabled via build config 00:03:41.371 bbdev: explicitly disabled via build config 00:03:41.371 bitratestats: explicitly disabled via build config 00:03:41.371 bpf: explicitly disabled via build config 00:03:41.371 cfgfile: explicitly disabled via build config 00:03:41.371 distributor: explicitly disabled via build config 00:03:41.371 efd: explicitly disabled via build config 00:03:41.371 eventdev: explicitly disabled via build config 00:03:41.371 dispatcher: explicitly disabled via build config 00:03:41.371 gpudev: explicitly disabled via build config 00:03:41.371 gro: explicitly disabled via build config 00:03:41.371 gso: explicitly disabled via build config 00:03:41.371 ip_frag: explicitly disabled via build config 00:03:41.371 jobstats: explicitly disabled via build config 00:03:41.371 latencystats: explicitly disabled via build config 00:03:41.371 lpm: explicitly disabled via build config 00:03:41.371 member: explicitly disabled via build config 00:03:41.371 pcapng: explicitly disabled via build config 00:03:41.371 rawdev: explicitly disabled via build config 00:03:41.371 regexdev: explicitly disabled via build config 00:03:41.371 mldev: explicitly disabled via build config 00:03:41.371 rib: explicitly disabled via build config 00:03:41.371 sched: explicitly disabled via build 
config 00:03:41.371 stack: explicitly disabled via build config 00:03:41.371 ipsec: explicitly disabled via build config 00:03:41.371 pdcp: explicitly disabled via build config 00:03:41.371 fib: explicitly disabled via build config 00:03:41.371 port: explicitly disabled via build config 00:03:41.371 pdump: explicitly disabled via build config 00:03:41.371 table: explicitly disabled via build config 00:03:41.371 pipeline: explicitly disabled via build config 00:03:41.371 graph: explicitly disabled via build config 00:03:41.371 node: explicitly disabled via build config 00:03:41.371 00:03:41.371 drivers: 00:03:41.371 common/cpt: not in enabled drivers build config 00:03:41.371 common/dpaax: not in enabled drivers build config 00:03:41.371 common/iavf: not in enabled drivers build config 00:03:41.371 common/idpf: not in enabled drivers build config 00:03:41.371 common/ionic: not in enabled drivers build config 00:03:41.371 common/mvep: not in enabled drivers build config 00:03:41.371 common/octeontx: not in enabled drivers build config 00:03:41.371 bus/auxiliary: not in enabled drivers build config 00:03:41.371 bus/cdx: not in enabled drivers build config 00:03:41.371 bus/dpaa: not in enabled drivers build config 00:03:41.371 bus/fslmc: not in enabled drivers build config 00:03:41.371 bus/ifpga: not in enabled drivers build config 00:03:41.371 bus/platform: not in enabled drivers build config 00:03:41.371 bus/uacce: not in enabled drivers build config 00:03:41.371 bus/vmbus: not in enabled drivers build config 00:03:41.371 common/cnxk: not in enabled drivers build config 00:03:41.371 common/mlx5: not in enabled drivers build config 00:03:41.371 common/nfp: not in enabled drivers build config 00:03:41.371 common/nitrox: not in enabled drivers build config 00:03:41.371 common/qat: not in enabled drivers build config 00:03:41.371 common/sfc_efx: not in enabled drivers build config 00:03:41.371 mempool/bucket: not in enabled drivers build config 00:03:41.371 mempool/cnxk: 
not in enabled drivers build config 00:03:41.371 mempool/dpaa: not in enabled drivers build config 00:03:41.371 mempool/dpaa2: not in enabled drivers build config 00:03:41.371 mempool/octeontx: not in enabled drivers build config 00:03:41.371 mempool/stack: not in enabled drivers build config 00:03:41.371 dma/cnxk: not in enabled drivers build config 00:03:41.371 dma/dpaa: not in enabled drivers build config 00:03:41.371 dma/dpaa2: not in enabled drivers build config 00:03:41.371 dma/hisilicon: not in enabled drivers build config 00:03:41.371 dma/idxd: not in enabled drivers build config 00:03:41.371 dma/ioat: not in enabled drivers build config 00:03:41.372 dma/skeleton: not in enabled drivers build config 00:03:41.372 net/af_packet: not in enabled drivers build config 00:03:41.372 net/af_xdp: not in enabled drivers build config 00:03:41.372 net/ark: not in enabled drivers build config 00:03:41.372 net/atlantic: not in enabled drivers build config 00:03:41.372 net/avp: not in enabled drivers build config 00:03:41.372 net/axgbe: not in enabled drivers build config 00:03:41.372 net/bnx2x: not in enabled drivers build config 00:03:41.372 net/bnxt: not in enabled drivers build config 00:03:41.372 net/bonding: not in enabled drivers build config 00:03:41.372 net/cnxk: not in enabled drivers build config 00:03:41.372 net/cpfl: not in enabled drivers build config 00:03:41.372 net/cxgbe: not in enabled drivers build config 00:03:41.372 net/dpaa: not in enabled drivers build config 00:03:41.372 net/dpaa2: not in enabled drivers build config 00:03:41.372 net/e1000: not in enabled drivers build config 00:03:41.372 net/ena: not in enabled drivers build config 00:03:41.372 net/enetc: not in enabled drivers build config 00:03:41.372 net/enetfec: not in enabled drivers build config 00:03:41.372 net/enic: not in enabled drivers build config 00:03:41.372 net/failsafe: not in enabled drivers build config 00:03:41.372 net/fm10k: not in enabled drivers build config 00:03:41.372 
net/gve: not in enabled drivers build config 00:03:41.372 net/hinic: not in enabled drivers build config 00:03:41.372 net/hns3: not in enabled drivers build config 00:03:41.372 net/i40e: not in enabled drivers build config 00:03:41.372 net/iavf: not in enabled drivers build config 00:03:41.372 net/ice: not in enabled drivers build config 00:03:41.372 net/idpf: not in enabled drivers build config 00:03:41.372 net/igc: not in enabled drivers build config 00:03:41.372 net/ionic: not in enabled drivers build config 00:03:41.372 net/ipn3ke: not in enabled drivers build config 00:03:41.372 net/ixgbe: not in enabled drivers build config 00:03:41.372 net/mana: not in enabled drivers build config 00:03:41.372 net/memif: not in enabled drivers build config 00:03:41.372 net/mlx4: not in enabled drivers build config 00:03:41.372 net/mlx5: not in enabled drivers build config 00:03:41.372 net/mvneta: not in enabled drivers build config 00:03:41.372 net/mvpp2: not in enabled drivers build config 00:03:41.372 net/netvsc: not in enabled drivers build config 00:03:41.372 net/nfb: not in enabled drivers build config 00:03:41.372 net/nfp: not in enabled drivers build config 00:03:41.372 net/ngbe: not in enabled drivers build config 00:03:41.372 net/null: not in enabled drivers build config 00:03:41.372 net/octeontx: not in enabled drivers build config 00:03:41.372 net/octeon_ep: not in enabled drivers build config 00:03:41.372 net/pcap: not in enabled drivers build config 00:03:41.372 net/pfe: not in enabled drivers build config 00:03:41.372 net/qede: not in enabled drivers build config 00:03:41.372 net/ring: not in enabled drivers build config 00:03:41.372 net/sfc: not in enabled drivers build config 00:03:41.372 net/softnic: not in enabled drivers build config 00:03:41.372 net/tap: not in enabled drivers build config 00:03:41.372 net/thunderx: not in enabled drivers build config 00:03:41.372 net/txgbe: not in enabled drivers build config 00:03:41.372 net/vdev_netvsc: not in enabled 
drivers build config 00:03:41.372 net/vhost: not in enabled drivers build config 00:03:41.372 net/virtio: not in enabled drivers build config 00:03:41.372 net/vmxnet3: not in enabled drivers build config 00:03:41.372 raw/*: missing internal dependency, "rawdev" 00:03:41.372 crypto/armv8: not in enabled drivers build config 00:03:41.372 crypto/bcmfs: not in enabled drivers build config 00:03:41.372 crypto/caam_jr: not in enabled drivers build config 00:03:41.372 crypto/ccp: not in enabled drivers build config 00:03:41.372 crypto/cnxk: not in enabled drivers build config 00:03:41.372 crypto/dpaa_sec: not in enabled drivers build config 00:03:41.372 crypto/dpaa2_sec: not in enabled drivers build config 00:03:41.372 crypto/ipsec_mb: not in enabled drivers build config 00:03:41.372 crypto/mlx5: not in enabled drivers build config 00:03:41.372 crypto/mvsam: not in enabled drivers build config 00:03:41.372 crypto/nitrox: not in enabled drivers build config 00:03:41.372 crypto/null: not in enabled drivers build config 00:03:41.372 crypto/octeontx: not in enabled drivers build config 00:03:41.372 crypto/openssl: not in enabled drivers build config 00:03:41.372 crypto/scheduler: not in enabled drivers build config 00:03:41.372 crypto/uadk: not in enabled drivers build config 00:03:41.372 crypto/virtio: not in enabled drivers build config 00:03:41.372 compress/isal: not in enabled drivers build config 00:03:41.372 compress/mlx5: not in enabled drivers build config 00:03:41.372 compress/nitrox: not in enabled drivers build config 00:03:41.372 compress/octeontx: not in enabled drivers build config 00:03:41.372 compress/zlib: not in enabled drivers build config 00:03:41.372 regex/*: missing internal dependency, "regexdev" 00:03:41.372 ml/*: missing internal dependency, "mldev" 00:03:41.372 vdpa/ifc: not in enabled drivers build config 00:03:41.372 vdpa/mlx5: not in enabled drivers build config 00:03:41.372 vdpa/nfp: not in enabled drivers build config 00:03:41.372 vdpa/sfc: not 
in enabled drivers build config 00:03:41.372 event/*: missing internal dependency, "eventdev" 00:03:41.372 baseband/*: missing internal dependency, "bbdev" 00:03:41.372 gpu/*: missing internal dependency, "gpudev" 00:03:41.372 00:03:41.372 00:03:41.372 Build targets in project: 85 00:03:41.372 00:03:41.372 DPDK 24.03.0 00:03:41.372 00:03:41.372 User defined options 00:03:41.372 buildtype : debug 00:03:41.372 default_library : shared 00:03:41.372 libdir : lib 00:03:41.372 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:41.372 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:41.372 c_link_args : 00:03:41.372 cpu_instruction_set: native 00:03:41.372 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:41.372 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:41.372 enable_docs : false 00:03:41.372 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:41.372 enable_kmods : false 00:03:41.372 max_lcores : 128 00:03:41.372 tests : false 00:03:41.372 00:03:41.372 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:41.372 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:41.633 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:41.633 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:41.633 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:41.633 [4/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:41.633 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:41.633 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:41.633 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:41.633 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:41.633 [9/268] Linking static target lib/librte_kvargs.a 00:03:41.633 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:41.633 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:41.633 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:41.633 [13/268] Linking static target lib/librte_log.a 00:03:41.633 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:41.633 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:41.633 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:42.204 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.462 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:42.463 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:42.463 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:42.463 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:42.463 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:42.463 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:42.463 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:42.463 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:42.463 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
00:03:42.463 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:42.463 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:42.463 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:42.463 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:42.463 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:42.463 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:42.463 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:42.463 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:42.463 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:42.463 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:42.463 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:42.463 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:42.463 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:42.463 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:42.463 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:42.463 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:42.463 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:42.463 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:42.463 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:42.463 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:42.463 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:42.463 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 
00:03:42.463 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:42.463 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.463 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:42.463 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:42.463 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:42.724 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:42.724 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:42.724 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:42.724 [57/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:42.724 [58/268] Linking static target lib/librte_telemetry.a 00:03:42.724 [59/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.724 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:42.724 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:42.724 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:42.724 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:42.724 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:42.724 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:42.724 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:42.724 [67/268] Linking target lib/librte_log.so.24.1 00:03:42.988 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:42.988 [69/268] Linking static target lib/librte_pci.a 00:03:42.988 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:43.246 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:43.246 [72/268] Generating symbol 
file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:43.246 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:43.246 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:43.246 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:43.246 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:43.246 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:43.246 [78/268] Linking target lib/librte_kvargs.so.24.1 00:03:43.246 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:43.246 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:43.246 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:43.507 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:43.507 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:43.507 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:43.507 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:43.507 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:43.507 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:43.507 [88/268] Linking static target lib/librte_meter.a 00:03:43.507 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:43.507 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:43.507 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:43.507 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:43.507 [93/268] Linking static target lib/librte_ring.a 00:03:43.507 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:43.507 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:43.507 [96/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:43.507 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:43.507 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:43.507 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:43.507 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:43.507 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:43.507 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.507 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:43.507 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:43.507 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:43.507 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:43.507 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:43.507 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:43.507 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:43.768 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:43.769 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:43.769 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:43.769 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:43.769 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:43.769 [115/268] Linking static target lib/librte_eal.a 00:03:43.769 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:43.769 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:43.769 [118/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:43.769 [119/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.769 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:43.769 [121/268] Linking static target lib/librte_mempool.a 00:03:43.769 [122/268] Linking static target lib/librte_rcu.a 00:03:43.769 [123/268] Linking target lib/librte_telemetry.so.24.1 00:03:43.769 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:43.769 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:43.769 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:43.769 [127/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:43.769 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:44.028 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:44.028 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:44.028 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:44.028 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:44.028 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.028 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:44.028 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.291 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:44.291 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:44.291 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:44.291 [139/268] Linking static target lib/librte_net.a 00:03:44.291 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:44.291 [141/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:44.291 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:44.291 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:44.291 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:44.291 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:44.291 [146/268] Linking static target lib/librte_cmdline.a 00:03:44.291 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:44.553 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.553 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:44.553 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:44.553 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:44.553 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:44.553 [153/268] Linking static target lib/librte_timer.a 00:03:44.553 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:44.553 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:44.553 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:44.553 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:44.814 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:44.814 [159/268] Linking static target lib/librte_dmadev.a 00:03:44.814 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:44.814 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:44.814 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.814 [163/268] Compiling 
C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:44.814 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:44.814 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:44.814 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:44.814 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:44.814 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:44.814 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.814 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:44.814 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:44.814 [172/268] Linking static target lib/librte_compressdev.a 00:03:45.072 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.072 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:45.072 [175/268] Linking static target lib/librte_power.a 00:03:45.072 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:45.072 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:45.072 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:45.072 [179/268] Linking static target lib/librte_hash.a 00:03:45.072 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:45.072 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:45.072 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:45.072 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:45.072 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:45.072 [185/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:45.072 
[186/268] Linking static target lib/librte_mbuf.a 00:03:45.072 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:45.072 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.329 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:45.329 [190/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.329 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:45.329 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:45.329 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:45.329 [194/268] Linking static target lib/librte_reorder.a 00:03:45.329 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:45.329 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:45.329 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:45.329 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:45.329 [199/268] Linking static target drivers/librte_bus_vdev.a 00:03:45.329 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:45.329 [201/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:45.329 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:45.329 [203/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:45.329 [204/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.587 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.587 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:45.587 [207/268] Linking static target lib/librte_security.a 
00:03:45.587 [208/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.587 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:45.587 [210/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.587 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:45.587 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:45.587 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.587 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:45.587 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:45.587 [216/268] Linking static target drivers/librte_mempool_ring.a 00:03:45.587 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:45.587 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:45.587 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:45.587 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.587 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:45.587 [222/268] Linking static target lib/librte_ethdev.a 00:03:45.844 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:45.844 [224/268] Linking static target lib/librte_cryptodev.a 00:03:45.844 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.103 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.036 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.408 
[228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:49.782 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.071 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.071 [231/268] Linking target lib/librte_eal.so.24.1 00:03:50.071 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:50.331 [233/268] Linking target lib/librte_pci.so.24.1 00:03:50.331 [234/268] Linking target lib/librte_ring.so.24.1 00:03:50.331 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:50.331 [236/268] Linking target lib/librte_meter.so.24.1 00:03:50.331 [237/268] Linking target lib/librte_timer.so.24.1 00:03:50.331 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:50.331 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:50.331 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:50.331 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:50.331 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:50.331 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:50.331 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:50.331 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:50.331 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:50.589 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:50.589 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:50.589 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:50.589 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:50.589 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:50.589 [252/268] 
Linking target lib/librte_reorder.so.24.1 00:03:50.589 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:50.589 [254/268] Linking target lib/librte_net.so.24.1 00:03:50.589 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:50.847 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:50.847 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:50.847 [258/268] Linking target lib/librte_hash.so.24.1 00:03:50.847 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:50.847 [260/268] Linking target lib/librte_security.so.24.1 00:03:50.847 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:51.104 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:51.104 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:51.104 [264/268] Linking target lib/librte_power.so.24.1 00:03:53.630 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:53.630 [266/268] Linking static target lib/librte_vhost.a 00:03:54.563 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.563 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:54.822 INFO: autodetecting backend as ninja 00:03:54.822 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:55.756 CC lib/ut_mock/mock.o 00:03:55.756 CC lib/log/log.o 00:03:55.756 CC lib/log/log_flags.o 00:03:55.756 CC lib/log/log_deprecated.o 00:03:55.756 CC lib/ut/ut.o 00:03:55.756 LIB libspdk_log.a 00:03:55.756 LIB libspdk_ut.a 00:03:55.756 LIB libspdk_ut_mock.a 00:03:55.756 SO libspdk_log.so.7.0 00:03:55.756 SO libspdk_ut.so.2.0 00:03:55.756 SO libspdk_ut_mock.so.6.0 00:03:55.756 SYMLINK libspdk_ut.so 00:03:55.756 SYMLINK libspdk_ut_mock.so 00:03:55.756 SYMLINK libspdk_log.so 00:03:56.015 CXX 
lib/trace_parser/trace.o 00:03:56.015 CC lib/dma/dma.o 00:03:56.015 CC lib/util/base64.o 00:03:56.015 CC lib/util/bit_array.o 00:03:56.015 CC lib/ioat/ioat.o 00:03:56.015 CC lib/util/cpuset.o 00:03:56.015 CC lib/util/crc16.o 00:03:56.015 CC lib/util/crc32.o 00:03:56.015 CC lib/util/crc32c.o 00:03:56.015 CC lib/util/crc32_ieee.o 00:03:56.015 CC lib/util/crc64.o 00:03:56.015 CC lib/util/dif.o 00:03:56.015 CC lib/util/fd.o 00:03:56.015 CC lib/util/fd_group.o 00:03:56.015 CC lib/util/file.o 00:03:56.015 CC lib/util/hexlify.o 00:03:56.015 CC lib/util/iov.o 00:03:56.015 CC lib/util/math.o 00:03:56.015 CC lib/util/net.o 00:03:56.015 CC lib/util/pipe.o 00:03:56.015 CC lib/util/strerror_tls.o 00:03:56.015 CC lib/util/string.o 00:03:56.015 CC lib/util/uuid.o 00:03:56.015 CC lib/util/zipf.o 00:03:56.015 CC lib/util/xor.o 00:03:56.015 CC lib/vfio_user/host/vfio_user_pci.o 00:03:56.015 CC lib/vfio_user/host/vfio_user.o 00:03:56.273 LIB libspdk_dma.a 00:03:56.273 SO libspdk_dma.so.4.0 00:03:56.273 SYMLINK libspdk_dma.so 00:03:56.531 LIB libspdk_ioat.a 00:03:56.531 LIB libspdk_vfio_user.a 00:03:56.531 SO libspdk_ioat.so.7.0 00:03:56.531 SO libspdk_vfio_user.so.5.0 00:03:56.531 SYMLINK libspdk_ioat.so 00:03:56.531 SYMLINK libspdk_vfio_user.so 00:03:56.531 LIB libspdk_util.a 00:03:56.531 SO libspdk_util.so.10.0 00:03:56.789 SYMLINK libspdk_util.so 00:03:57.047 CC lib/rdma_utils/rdma_utils.o 00:03:57.047 CC lib/idxd/idxd.o 00:03:57.047 CC lib/json/json_parse.o 00:03:57.047 CC lib/idxd/idxd_user.o 00:03:57.047 CC lib/json/json_util.o 00:03:57.047 CC lib/idxd/idxd_kernel.o 00:03:57.047 CC lib/rdma_provider/common.o 00:03:57.047 CC lib/conf/conf.o 00:03:57.047 CC lib/json/json_write.o 00:03:57.047 CC lib/vmd/vmd.o 00:03:57.047 CC lib/env_dpdk/env.o 00:03:57.047 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:57.047 CC lib/env_dpdk/memory.o 00:03:57.047 CC lib/vmd/led.o 00:03:57.047 CC lib/env_dpdk/pci.o 00:03:57.047 CC lib/env_dpdk/init.o 00:03:57.047 CC lib/env_dpdk/threads.o 
00:03:57.047 CC lib/env_dpdk/pci_ioat.o 00:03:57.047 CC lib/env_dpdk/pci_virtio.o 00:03:57.047 CC lib/env_dpdk/pci_vmd.o 00:03:57.047 CC lib/env_dpdk/pci_idxd.o 00:03:57.047 CC lib/env_dpdk/pci_event.o 00:03:57.047 CC lib/env_dpdk/sigbus_handler.o 00:03:57.047 CC lib/env_dpdk/pci_dpdk.o 00:03:57.047 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:57.047 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:57.047 LIB libspdk_trace_parser.a 00:03:57.047 SO libspdk_trace_parser.so.5.0 00:03:57.305 LIB libspdk_rdma_provider.a 00:03:57.305 SYMLINK libspdk_trace_parser.so 00:03:57.305 SO libspdk_rdma_provider.so.6.0 00:03:57.305 LIB libspdk_conf.a 00:03:57.305 SO libspdk_conf.so.6.0 00:03:57.305 LIB libspdk_rdma_utils.a 00:03:57.305 SYMLINK libspdk_rdma_provider.so 00:03:57.305 SO libspdk_rdma_utils.so.1.0 00:03:57.305 LIB libspdk_json.a 00:03:57.305 SYMLINK libspdk_conf.so 00:03:57.305 SO libspdk_json.so.6.0 00:03:57.305 SYMLINK libspdk_rdma_utils.so 00:03:57.305 SYMLINK libspdk_json.so 00:03:57.563 CC lib/jsonrpc/jsonrpc_server.o 00:03:57.563 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:57.563 CC lib/jsonrpc/jsonrpc_client.o 00:03:57.563 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:57.563 LIB libspdk_idxd.a 00:03:57.563 SO libspdk_idxd.so.12.0 00:03:57.563 SYMLINK libspdk_idxd.so 00:03:57.563 LIB libspdk_vmd.a 00:03:57.563 SO libspdk_vmd.so.6.0 00:03:57.821 SYMLINK libspdk_vmd.so 00:03:57.821 LIB libspdk_jsonrpc.a 00:03:57.821 SO libspdk_jsonrpc.so.6.0 00:03:57.821 SYMLINK libspdk_jsonrpc.so 00:03:58.079 CC lib/rpc/rpc.o 00:03:58.337 LIB libspdk_rpc.a 00:03:58.337 SO libspdk_rpc.so.6.0 00:03:58.337 SYMLINK libspdk_rpc.so 00:03:58.595 CC lib/notify/notify.o 00:03:58.595 CC lib/notify/notify_rpc.o 00:03:58.595 CC lib/trace/trace.o 00:03:58.595 CC lib/keyring/keyring.o 00:03:58.595 CC lib/trace/trace_flags.o 00:03:58.595 CC lib/keyring/keyring_rpc.o 00:03:58.595 CC lib/trace/trace_rpc.o 00:03:58.595 LIB libspdk_notify.a 00:03:58.595 SO libspdk_notify.so.6.0 00:03:58.853 SYMLINK libspdk_notify.so 
00:03:58.853 LIB libspdk_keyring.a 00:03:58.853 LIB libspdk_trace.a 00:03:58.853 SO libspdk_keyring.so.1.0 00:03:58.853 SO libspdk_trace.so.10.0 00:03:58.853 SYMLINK libspdk_keyring.so 00:03:58.853 SYMLINK libspdk_trace.so 00:03:59.111 CC lib/thread/thread.o 00:03:59.111 CC lib/thread/iobuf.o 00:03:59.111 CC lib/sock/sock.o 00:03:59.111 CC lib/sock/sock_rpc.o 00:03:59.111 LIB libspdk_env_dpdk.a 00:03:59.111 SO libspdk_env_dpdk.so.15.0 00:03:59.369 SYMLINK libspdk_env_dpdk.so 00:03:59.369 LIB libspdk_sock.a 00:03:59.369 SO libspdk_sock.so.10.0 00:03:59.369 SYMLINK libspdk_sock.so 00:03:59.628 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:59.628 CC lib/nvme/nvme_ctrlr.o 00:03:59.628 CC lib/nvme/nvme_fabric.o 00:03:59.628 CC lib/nvme/nvme_ns_cmd.o 00:03:59.628 CC lib/nvme/nvme_ns.o 00:03:59.628 CC lib/nvme/nvme_pcie_common.o 00:03:59.628 CC lib/nvme/nvme_pcie.o 00:03:59.628 CC lib/nvme/nvme_qpair.o 00:03:59.628 CC lib/nvme/nvme.o 00:03:59.628 CC lib/nvme/nvme_quirks.o 00:03:59.628 CC lib/nvme/nvme_transport.o 00:03:59.628 CC lib/nvme/nvme_discovery.o 00:03:59.628 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:59.628 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:59.628 CC lib/nvme/nvme_tcp.o 00:03:59.628 CC lib/nvme/nvme_opal.o 00:03:59.628 CC lib/nvme/nvme_io_msg.o 00:03:59.628 CC lib/nvme/nvme_poll_group.o 00:03:59.628 CC lib/nvme/nvme_zns.o 00:03:59.628 CC lib/nvme/nvme_stubs.o 00:03:59.628 CC lib/nvme/nvme_auth.o 00:03:59.628 CC lib/nvme/nvme_cuse.o 00:03:59.628 CC lib/nvme/nvme_vfio_user.o 00:03:59.628 CC lib/nvme/nvme_rdma.o 00:04:00.561 LIB libspdk_thread.a 00:04:00.561 SO libspdk_thread.so.10.1 00:04:00.562 SYMLINK libspdk_thread.so 00:04:00.820 CC lib/virtio/virtio.o 00:04:00.820 CC lib/blob/blobstore.o 00:04:00.820 CC lib/vfu_tgt/tgt_endpoint.o 00:04:00.820 CC lib/virtio/virtio_vhost_user.o 00:04:00.820 CC lib/accel/accel.o 00:04:00.820 CC lib/init/json_config.o 00:04:00.820 CC lib/vfu_tgt/tgt_rpc.o 00:04:00.820 CC lib/blob/request.o 00:04:00.820 CC lib/virtio/virtio_vfio_user.o 
00:04:00.820 CC lib/accel/accel_rpc.o 00:04:00.820 CC lib/init/subsystem.o 00:04:00.820 CC lib/virtio/virtio_pci.o 00:04:00.820 CC lib/blob/zeroes.o 00:04:00.820 CC lib/accel/accel_sw.o 00:04:00.820 CC lib/init/subsystem_rpc.o 00:04:00.820 CC lib/blob/blob_bs_dev.o 00:04:00.820 CC lib/init/rpc.o 00:04:01.078 LIB libspdk_init.a 00:04:01.078 SO libspdk_init.so.5.0 00:04:01.078 LIB libspdk_vfu_tgt.a 00:04:01.078 LIB libspdk_virtio.a 00:04:01.078 SYMLINK libspdk_init.so 00:04:01.384 SO libspdk_vfu_tgt.so.3.0 00:04:01.384 SO libspdk_virtio.so.7.0 00:04:01.384 SYMLINK libspdk_vfu_tgt.so 00:04:01.384 SYMLINK libspdk_virtio.so 00:04:01.384 CC lib/event/app.o 00:04:01.384 CC lib/event/reactor.o 00:04:01.384 CC lib/event/log_rpc.o 00:04:01.384 CC lib/event/app_rpc.o 00:04:01.384 CC lib/event/scheduler_static.o 00:04:01.642 LIB libspdk_event.a 00:04:01.900 SO libspdk_event.so.14.0 00:04:01.900 SYMLINK libspdk_event.so 00:04:01.900 LIB libspdk_accel.a 00:04:01.900 SO libspdk_accel.so.16.0 00:04:01.900 SYMLINK libspdk_accel.so 00:04:01.900 LIB libspdk_nvme.a 00:04:02.158 CC lib/bdev/bdev.o 00:04:02.158 CC lib/bdev/bdev_rpc.o 00:04:02.158 CC lib/bdev/bdev_zone.o 00:04:02.158 CC lib/bdev/part.o 00:04:02.158 CC lib/bdev/scsi_nvme.o 00:04:02.158 SO libspdk_nvme.so.13.1 00:04:02.416 SYMLINK libspdk_nvme.so 00:04:03.791 LIB libspdk_blob.a 00:04:03.791 SO libspdk_blob.so.11.0 00:04:04.049 SYMLINK libspdk_blob.so 00:04:04.049 CC lib/lvol/lvol.o 00:04:04.049 CC lib/blobfs/blobfs.o 00:04:04.049 CC lib/blobfs/tree.o 00:04:04.612 LIB libspdk_bdev.a 00:04:04.612 SO libspdk_bdev.so.16.0 00:04:04.885 SYMLINK libspdk_bdev.so 00:04:04.885 LIB libspdk_blobfs.a 00:04:04.885 CC lib/ublk/ublk.o 00:04:04.885 CC lib/scsi/dev.o 00:04:04.885 CC lib/ublk/ublk_rpc.o 00:04:04.885 CC lib/ftl/ftl_core.o 00:04:04.885 CC lib/nvmf/ctrlr.o 00:04:04.885 CC lib/nbd/nbd.o 00:04:04.885 CC lib/scsi/lun.o 00:04:04.885 CC lib/ftl/ftl_init.o 00:04:04.885 CC lib/nvmf/ctrlr_discovery.o 00:04:04.885 CC lib/nbd/nbd_rpc.o 
00:04:04.885 CC lib/scsi/port.o 00:04:04.885 CC lib/nvmf/ctrlr_bdev.o 00:04:04.885 CC lib/ftl/ftl_layout.o 00:04:04.885 CC lib/ftl/ftl_debug.o 00:04:04.885 CC lib/scsi/scsi.o 00:04:04.885 CC lib/nvmf/subsystem.o 00:04:04.885 CC lib/nvmf/nvmf.o 00:04:04.885 CC lib/ftl/ftl_io.o 00:04:04.885 CC lib/scsi/scsi_bdev.o 00:04:04.885 CC lib/ftl/ftl_sb.o 00:04:04.885 CC lib/scsi/scsi_pr.o 00:04:04.885 CC lib/nvmf/nvmf_rpc.o 00:04:04.885 CC lib/nvmf/transport.o 00:04:04.885 CC lib/scsi/scsi_rpc.o 00:04:04.885 CC lib/ftl/ftl_l2p.o 00:04:04.885 CC lib/scsi/task.o 00:04:04.885 CC lib/nvmf/tcp.o 00:04:04.885 CC lib/ftl/ftl_l2p_flat.o 00:04:04.885 CC lib/nvmf/stubs.o 00:04:04.885 CC lib/ftl/ftl_nv_cache.o 00:04:04.885 CC lib/ftl/ftl_band.o 00:04:04.885 CC lib/ftl/ftl_band_ops.o 00:04:04.885 CC lib/nvmf/mdns_server.o 00:04:04.885 CC lib/nvmf/vfio_user.o 00:04:04.885 CC lib/ftl/ftl_writer.o 00:04:04.885 CC lib/nvmf/rdma.o 00:04:04.885 CC lib/ftl/ftl_rq.o 00:04:04.885 CC lib/nvmf/auth.o 00:04:04.885 CC lib/ftl/ftl_reloc.o 00:04:04.885 CC lib/ftl/ftl_l2p_cache.o 00:04:04.885 CC lib/ftl/ftl_p2l.o 00:04:04.885 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.885 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.885 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:04.885 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:04.885 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:04.885 SO libspdk_blobfs.so.10.0 00:04:04.885 LIB libspdk_lvol.a 00:04:05.143 SO libspdk_lvol.so.10.0 00:04:05.143 SYMLINK libspdk_blobfs.so 00:04:05.143 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.143 SYMLINK libspdk_lvol.so 00:04:05.143 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.402 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.402 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.402 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:05.402 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:05.402 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:05.402 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:05.402 CC lib/ftl/utils/ftl_conf.o 00:04:05.402 CC lib/ftl/utils/ftl_md.o 00:04:05.402 CC lib/ftl/utils/ftl_mempool.o 
00:04:05.402 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.402 CC lib/ftl/utils/ftl_property.o 00:04:05.402 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.402 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.402 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.402 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.402 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:05.402 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.402 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:05.402 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.662 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.662 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.662 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.662 CC lib/ftl/base/ftl_base_dev.o 00:04:05.662 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.662 CC lib/ftl/ftl_trace.o 00:04:05.662 LIB libspdk_nbd.a 00:04:05.662 SO libspdk_nbd.so.7.0 00:04:05.920 SYMLINK libspdk_nbd.so 00:04:05.920 LIB libspdk_scsi.a 00:04:05.920 SO libspdk_scsi.so.9.0 00:04:05.920 LIB libspdk_ublk.a 00:04:05.920 SYMLINK libspdk_scsi.so 00:04:05.920 SO libspdk_ublk.so.3.0 00:04:06.178 SYMLINK libspdk_ublk.so 00:04:06.178 CC lib/vhost/vhost.o 00:04:06.178 CC lib/iscsi/conn.o 00:04:06.178 CC lib/vhost/vhost_rpc.o 00:04:06.178 CC lib/iscsi/init_grp.o 00:04:06.178 CC lib/iscsi/iscsi.o 00:04:06.178 CC lib/iscsi/md5.o 00:04:06.178 CC lib/vhost/vhost_scsi.o 00:04:06.178 CC lib/vhost/vhost_blk.o 00:04:06.178 CC lib/iscsi/param.o 00:04:06.179 CC lib/iscsi/portal_grp.o 00:04:06.179 CC lib/vhost/rte_vhost_user.o 00:04:06.179 CC lib/iscsi/tgt_node.o 00:04:06.179 CC lib/iscsi/iscsi_subsystem.o 00:04:06.179 CC lib/iscsi/iscsi_rpc.o 00:04:06.179 CC lib/iscsi/task.o 00:04:06.437 LIB libspdk_ftl.a 00:04:06.695 SO libspdk_ftl.so.9.0 00:04:06.953 SYMLINK libspdk_ftl.so 00:04:07.519 LIB libspdk_vhost.a 00:04:07.519 SO libspdk_vhost.so.8.0 00:04:07.519 LIB libspdk_nvmf.a 00:04:07.519 SYMLINK libspdk_vhost.so 00:04:07.519 SO libspdk_nvmf.so.19.0 00:04:07.519 LIB libspdk_iscsi.a 00:04:07.519 SO libspdk_iscsi.so.8.0 00:04:07.777 SYMLINK 
libspdk_nvmf.so 00:04:07.777 SYMLINK libspdk_iscsi.so 00:04:08.035 CC module/env_dpdk/env_dpdk_rpc.o 00:04:08.035 CC module/vfu_device/vfu_virtio.o 00:04:08.035 CC module/vfu_device/vfu_virtio_blk.o 00:04:08.035 CC module/vfu_device/vfu_virtio_scsi.o 00:04:08.035 CC module/vfu_device/vfu_virtio_rpc.o 00:04:08.035 CC module/accel/error/accel_error.o 00:04:08.035 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.035 CC module/sock/posix/posix.o 00:04:08.035 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:08.035 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:08.035 CC module/accel/error/accel_error_rpc.o 00:04:08.035 CC module/keyring/file/keyring.o 00:04:08.035 CC module/keyring/file/keyring_rpc.o 00:04:08.035 CC module/blob/bdev/blob_bdev.o 00:04:08.035 CC module/keyring/linux/keyring.o 00:04:08.035 CC module/accel/dsa/accel_dsa.o 00:04:08.035 CC module/keyring/linux/keyring_rpc.o 00:04:08.035 CC module/accel/iaa/accel_iaa.o 00:04:08.035 CC module/accel/dsa/accel_dsa_rpc.o 00:04:08.035 CC module/accel/iaa/accel_iaa_rpc.o 00:04:08.035 CC module/accel/ioat/accel_ioat.o 00:04:08.035 CC module/accel/ioat/accel_ioat_rpc.o 00:04:08.293 LIB libspdk_env_dpdk_rpc.a 00:04:08.293 SO libspdk_env_dpdk_rpc.so.6.0 00:04:08.293 SYMLINK libspdk_env_dpdk_rpc.so 00:04:08.293 LIB libspdk_keyring_file.a 00:04:08.293 LIB libspdk_keyring_linux.a 00:04:08.293 LIB libspdk_scheduler_gscheduler.a 00:04:08.293 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.293 SO libspdk_keyring_file.so.1.0 00:04:08.293 SO libspdk_keyring_linux.so.1.0 00:04:08.293 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.293 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:08.293 LIB libspdk_accel_error.a 00:04:08.293 LIB libspdk_accel_ioat.a 00:04:08.293 LIB libspdk_scheduler_dynamic.a 00:04:08.293 SO libspdk_accel_error.so.2.0 00:04:08.293 LIB libspdk_accel_iaa.a 00:04:08.293 SO libspdk_accel_ioat.so.6.0 00:04:08.293 SYMLINK libspdk_keyring_file.so 00:04:08.293 SYMLINK libspdk_keyring_linux.so 
00:04:08.293 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.293 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.293 SO libspdk_scheduler_dynamic.so.4.0 00:04:08.293 SO libspdk_accel_iaa.so.3.0 00:04:08.551 SYMLINK libspdk_accel_error.so 00:04:08.551 LIB libspdk_accel_dsa.a 00:04:08.551 SYMLINK libspdk_scheduler_dynamic.so 00:04:08.551 LIB libspdk_blob_bdev.a 00:04:08.551 SYMLINK libspdk_accel_ioat.so 00:04:08.551 SO libspdk_accel_dsa.so.5.0 00:04:08.551 SYMLINK libspdk_accel_iaa.so 00:04:08.551 SO libspdk_blob_bdev.so.11.0 00:04:08.551 SYMLINK libspdk_blob_bdev.so 00:04:08.551 SYMLINK libspdk_accel_dsa.so 00:04:08.809 LIB libspdk_vfu_device.a 00:04:08.809 SO libspdk_vfu_device.so.3.0 00:04:08.809 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.809 CC module/bdev/null/bdev_null.o 00:04:08.809 CC module/bdev/gpt/gpt.o 00:04:08.809 CC module/bdev/nvme/bdev_nvme.o 00:04:08.809 CC module/bdev/error/vbdev_error.o 00:04:08.809 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.809 CC module/bdev/malloc/bdev_malloc.o 00:04:08.809 CC module/bdev/null/bdev_null_rpc.o 00:04:08.809 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.809 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.809 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.809 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.809 CC module/bdev/lvol/vbdev_lvol.o 00:04:08.809 CC module/bdev/nvme/nvme_rpc.o 00:04:08.809 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.809 CC module/bdev/delay/vbdev_delay.o 00:04:08.809 CC module/bdev/raid/bdev_raid.o 00:04:08.809 CC module/bdev/nvme/vbdev_opal.o 00:04:08.809 CC module/bdev/split/vbdev_split.o 00:04:08.809 CC module/bdev/iscsi/bdev_iscsi.o 00:04:08.809 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:08.809 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.809 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.809 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.809 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:08.809 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:08.809 CC 
module/bdev/ftl/bdev_ftl.o 00:04:08.809 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.809 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.809 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.809 CC module/bdev/raid/raid0.o 00:04:08.809 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.809 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:08.809 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:08.809 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.809 CC module/bdev/raid/raid1.o 00:04:08.809 CC module/bdev/raid/concat.o 00:04:08.809 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:08.809 CC module/bdev/aio/bdev_aio.o 00:04:08.809 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.809 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.809 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.809 SYMLINK libspdk_vfu_device.so 00:04:09.067 LIB libspdk_sock_posix.a 00:04:09.067 SO libspdk_sock_posix.so.6.0 00:04:09.067 LIB libspdk_blobfs_bdev.a 00:04:09.067 SYMLINK libspdk_sock_posix.so 00:04:09.067 LIB libspdk_bdev_split.a 00:04:09.325 SO libspdk_blobfs_bdev.so.6.0 00:04:09.325 SO libspdk_bdev_split.so.6.0 00:04:09.325 LIB libspdk_bdev_malloc.a 00:04:09.325 SO libspdk_bdev_malloc.so.6.0 00:04:09.325 LIB libspdk_bdev_null.a 00:04:09.325 SYMLINK libspdk_blobfs_bdev.so 00:04:09.325 LIB libspdk_bdev_error.a 00:04:09.325 SYMLINK libspdk_bdev_split.so 00:04:09.325 LIB libspdk_bdev_gpt.a 00:04:09.325 SO libspdk_bdev_null.so.6.0 00:04:09.325 SO libspdk_bdev_error.so.6.0 00:04:09.325 LIB libspdk_bdev_passthru.a 00:04:09.325 LIB libspdk_bdev_ftl.a 00:04:09.325 SYMLINK libspdk_bdev_malloc.so 00:04:09.325 SO libspdk_bdev_gpt.so.6.0 00:04:09.325 SO libspdk_bdev_passthru.so.6.0 00:04:09.325 LIB libspdk_bdev_aio.a 00:04:09.325 SO libspdk_bdev_ftl.so.6.0 00:04:09.325 SYMLINK libspdk_bdev_null.so 00:04:09.325 SYMLINK libspdk_bdev_error.so 00:04:09.325 LIB libspdk_bdev_iscsi.a 00:04:09.325 SO libspdk_bdev_aio.so.6.0 00:04:09.325 LIB libspdk_bdev_zone_block.a 00:04:09.325 SYMLINK libspdk_bdev_gpt.so 
00:04:09.325 LIB libspdk_bdev_delay.a 00:04:09.325 SYMLINK libspdk_bdev_passthru.so 00:04:09.325 SO libspdk_bdev_iscsi.so.6.0 00:04:09.325 SYMLINK libspdk_bdev_ftl.so 00:04:09.325 SO libspdk_bdev_zone_block.so.6.0 00:04:09.325 LIB libspdk_bdev_virtio.a 00:04:09.325 SO libspdk_bdev_delay.so.6.0 00:04:09.325 SYMLINK libspdk_bdev_aio.so 00:04:09.325 SYMLINK libspdk_bdev_iscsi.so 00:04:09.325 SO libspdk_bdev_virtio.so.6.0 00:04:09.583 SYMLINK libspdk_bdev_zone_block.so 00:04:09.583 SYMLINK libspdk_bdev_delay.so 00:04:09.583 LIB libspdk_bdev_lvol.a 00:04:09.583 SO libspdk_bdev_lvol.so.6.0 00:04:09.583 SYMLINK libspdk_bdev_virtio.so 00:04:09.583 SYMLINK libspdk_bdev_lvol.so 00:04:09.842 LIB libspdk_bdev_raid.a 00:04:09.842 SO libspdk_bdev_raid.so.6.0 00:04:09.842 SYMLINK libspdk_bdev_raid.so 00:04:11.741 LIB libspdk_bdev_nvme.a 00:04:11.741 SO libspdk_bdev_nvme.so.7.0 00:04:11.741 SYMLINK libspdk_bdev_nvme.so 00:04:11.999 CC module/event/subsystems/vmd/vmd.o 00:04:11.999 CC module/event/subsystems/iobuf/iobuf.o 00:04:11.999 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.999 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:11.999 CC module/event/subsystems/sock/sock.o 00:04:11.999 CC module/event/subsystems/scheduler/scheduler.o 00:04:11.999 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:11.999 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:11.999 CC module/event/subsystems/keyring/keyring.o 00:04:11.999 LIB libspdk_event_keyring.a 00:04:11.999 LIB libspdk_event_vhost_blk.a 00:04:11.999 LIB libspdk_event_vfu_tgt.a 00:04:11.999 LIB libspdk_event_scheduler.a 00:04:11.999 LIB libspdk_event_vmd.a 00:04:11.999 LIB libspdk_event_sock.a 00:04:12.257 LIB libspdk_event_iobuf.a 00:04:12.257 SO libspdk_event_keyring.so.1.0 00:04:12.257 SO libspdk_event_vhost_blk.so.3.0 00:04:12.257 SO libspdk_event_vfu_tgt.so.3.0 00:04:12.257 SO libspdk_event_scheduler.so.4.0 00:04:12.257 SO libspdk_event_sock.so.5.0 00:04:12.257 SO libspdk_event_vmd.so.6.0 00:04:12.257 SO 
libspdk_event_iobuf.so.3.0 00:04:12.257 SYMLINK libspdk_event_keyring.so 00:04:12.257 SYMLINK libspdk_event_vhost_blk.so 00:04:12.257 SYMLINK libspdk_event_vfu_tgt.so 00:04:12.257 SYMLINK libspdk_event_sock.so 00:04:12.257 SYMLINK libspdk_event_scheduler.so 00:04:12.257 SYMLINK libspdk_event_vmd.so 00:04:12.257 SYMLINK libspdk_event_iobuf.so 00:04:12.257 CC module/event/subsystems/accel/accel.o 00:04:12.515 LIB libspdk_event_accel.a 00:04:12.515 SO libspdk_event_accel.so.6.0 00:04:12.515 SYMLINK libspdk_event_accel.so 00:04:12.773 CC module/event/subsystems/bdev/bdev.o 00:04:13.031 LIB libspdk_event_bdev.a 00:04:13.031 SO libspdk_event_bdev.so.6.0 00:04:13.031 SYMLINK libspdk_event_bdev.so 00:04:13.289 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:13.289 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.289 CC module/event/subsystems/scsi/scsi.o 00:04:13.289 CC module/event/subsystems/nbd/nbd.o 00:04:13.289 CC module/event/subsystems/ublk/ublk.o 00:04:13.289 LIB libspdk_event_nbd.a 00:04:13.289 LIB libspdk_event_ublk.a 00:04:13.289 LIB libspdk_event_scsi.a 00:04:13.289 SO libspdk_event_nbd.so.6.0 00:04:13.289 SO libspdk_event_ublk.so.3.0 00:04:13.289 SO libspdk_event_scsi.so.6.0 00:04:13.289 SYMLINK libspdk_event_nbd.so 00:04:13.289 SYMLINK libspdk_event_ublk.so 00:04:13.548 SYMLINK libspdk_event_scsi.so 00:04:13.548 LIB libspdk_event_nvmf.a 00:04:13.548 SO libspdk_event_nvmf.so.6.0 00:04:13.548 SYMLINK libspdk_event_nvmf.so 00:04:13.548 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.548 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.806 LIB libspdk_event_vhost_scsi.a 00:04:13.806 LIB libspdk_event_iscsi.a 00:04:13.806 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.806 SO libspdk_event_iscsi.so.6.0 00:04:13.806 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.806 SYMLINK libspdk_event_iscsi.so 00:04:13.806 SO libspdk.so.6.0 00:04:13.806 SYMLINK libspdk.so 00:04:14.070 CC app/spdk_lspci/spdk_lspci.o 00:04:14.070 TEST_HEADER include/spdk/accel.h 
00:04:14.070 TEST_HEADER include/spdk/accel_module.h 00:04:14.070 CC app/spdk_top/spdk_top.o 00:04:14.070 CXX app/trace/trace.o 00:04:14.070 TEST_HEADER include/spdk/assert.h 00:04:14.070 TEST_HEADER include/spdk/barrier.h 00:04:14.070 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.070 TEST_HEADER include/spdk/bdev.h 00:04:14.070 TEST_HEADER include/spdk/base64.h 00:04:14.070 CC test/rpc_client/rpc_client_test.o 00:04:14.070 TEST_HEADER include/spdk/bdev_module.h 00:04:14.070 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.070 CC app/trace_record/trace_record.o 00:04:14.070 TEST_HEADER include/spdk/bit_array.h 00:04:14.070 CC app/spdk_nvme_identify/identify.o 00:04:14.070 CC app/spdk_nvme_perf/perf.o 00:04:14.070 TEST_HEADER include/spdk/bit_pool.h 00:04:14.070 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.070 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:14.070 TEST_HEADER include/spdk/blobfs.h 00:04:14.070 TEST_HEADER include/spdk/blob.h 00:04:14.070 TEST_HEADER include/spdk/conf.h 00:04:14.070 TEST_HEADER include/spdk/config.h 00:04:14.070 TEST_HEADER include/spdk/cpuset.h 00:04:14.070 TEST_HEADER include/spdk/crc16.h 00:04:14.070 TEST_HEADER include/spdk/crc32.h 00:04:14.070 TEST_HEADER include/spdk/crc64.h 00:04:14.070 TEST_HEADER include/spdk/dif.h 00:04:14.070 TEST_HEADER include/spdk/dma.h 00:04:14.070 TEST_HEADER include/spdk/endian.h 00:04:14.070 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.070 TEST_HEADER include/spdk/env.h 00:04:14.070 TEST_HEADER include/spdk/event.h 00:04:14.070 TEST_HEADER include/spdk/fd_group.h 00:04:14.070 TEST_HEADER include/spdk/fd.h 00:04:14.070 TEST_HEADER include/spdk/file.h 00:04:14.070 TEST_HEADER include/spdk/ftl.h 00:04:14.070 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.070 TEST_HEADER include/spdk/hexlify.h 00:04:14.070 TEST_HEADER include/spdk/histogram_data.h 00:04:14.070 TEST_HEADER include/spdk/idxd.h 00:04:14.070 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.070 TEST_HEADER include/spdk/init.h 00:04:14.070 
TEST_HEADER include/spdk/ioat.h 00:04:14.070 TEST_HEADER include/spdk/iscsi_spec.h 00:04:14.070 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.070 TEST_HEADER include/spdk/json.h 00:04:14.071 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.071 TEST_HEADER include/spdk/keyring_module.h 00:04:14.071 TEST_HEADER include/spdk/keyring.h 00:04:14.071 TEST_HEADER include/spdk/likely.h 00:04:14.071 TEST_HEADER include/spdk/log.h 00:04:14.071 TEST_HEADER include/spdk/lvol.h 00:04:14.071 TEST_HEADER include/spdk/memory.h 00:04:14.071 TEST_HEADER include/spdk/mmio.h 00:04:14.071 TEST_HEADER include/spdk/net.h 00:04:14.071 TEST_HEADER include/spdk/nbd.h 00:04:14.071 TEST_HEADER include/spdk/notify.h 00:04:14.071 TEST_HEADER include/spdk/nvme.h 00:04:14.071 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.071 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.071 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.071 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.071 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.071 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.071 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.071 TEST_HEADER include/spdk/nvmf.h 00:04:14.071 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.071 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.071 TEST_HEADER include/spdk/opal.h 00:04:14.071 TEST_HEADER include/spdk/opal_spec.h 00:04:14.071 TEST_HEADER include/spdk/pci_ids.h 00:04:14.071 TEST_HEADER include/spdk/pipe.h 00:04:14.071 TEST_HEADER include/spdk/queue.h 00:04:14.071 TEST_HEADER include/spdk/reduce.h 00:04:14.071 TEST_HEADER include/spdk/rpc.h 00:04:14.071 TEST_HEADER include/spdk/scheduler.h 00:04:14.071 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.071 TEST_HEADER include/spdk/scsi.h 00:04:14.071 TEST_HEADER include/spdk/sock.h 00:04:14.071 TEST_HEADER include/spdk/stdinc.h 00:04:14.071 TEST_HEADER include/spdk/thread.h 00:04:14.071 TEST_HEADER include/spdk/string.h 00:04:14.071 TEST_HEADER include/spdk/trace.h 00:04:14.071 TEST_HEADER 
include/spdk/trace_parser.h 00:04:14.071 TEST_HEADER include/spdk/tree.h 00:04:14.071 TEST_HEADER include/spdk/util.h 00:04:14.071 TEST_HEADER include/spdk/ublk.h 00:04:14.071 TEST_HEADER include/spdk/uuid.h 00:04:14.071 TEST_HEADER include/spdk/version.h 00:04:14.071 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.071 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.071 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.071 TEST_HEADER include/spdk/vhost.h 00:04:14.071 TEST_HEADER include/spdk/vmd.h 00:04:14.071 TEST_HEADER include/spdk/xor.h 00:04:14.071 TEST_HEADER include/spdk/zipf.h 00:04:14.071 CXX test/cpp_headers/accel.o 00:04:14.071 CXX test/cpp_headers/accel_module.o 00:04:14.071 CXX test/cpp_headers/assert.o 00:04:14.071 CXX test/cpp_headers/barrier.o 00:04:14.071 CXX test/cpp_headers/base64.o 00:04:14.071 CXX test/cpp_headers/bdev.o 00:04:14.071 CXX test/cpp_headers/bdev_module.o 00:04:14.071 CXX test/cpp_headers/bdev_zone.o 00:04:14.071 CXX test/cpp_headers/bit_array.o 00:04:14.071 CXX test/cpp_headers/bit_pool.o 00:04:14.071 CXX test/cpp_headers/blob_bdev.o 00:04:14.071 CXX test/cpp_headers/blobfs_bdev.o 00:04:14.071 CXX test/cpp_headers/blobfs.o 00:04:14.071 CXX test/cpp_headers/blob.o 00:04:14.071 CXX test/cpp_headers/conf.o 00:04:14.071 CC app/spdk_dd/spdk_dd.o 00:04:14.071 CXX test/cpp_headers/config.o 00:04:14.071 CXX test/cpp_headers/cpuset.o 00:04:14.071 CXX test/cpp_headers/crc16.o 00:04:14.071 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.071 CC app/nvmf_tgt/nvmf_main.o 00:04:14.334 CXX test/cpp_headers/crc32.o 00:04:14.334 CC test/env/vtophys/vtophys.o 00:04:14.334 CC test/app/histogram_perf/histogram_perf.o 00:04:14.334 CC examples/util/zipf/zipf.o 00:04:14.334 CC test/env/memory/memory_ut.o 00:04:14.334 CC test/env/pci/pci_ut.o 00:04:14.334 CC test/app/jsoncat/jsoncat.o 00:04:14.334 CC app/spdk_tgt/spdk_tgt.o 00:04:14.334 CC examples/ioat/verify/verify.o 00:04:14.334 CC test/thread/poller_perf/poller_perf.o 00:04:14.334 CC test/app/stub/stub.o 
00:04:14.334 CC app/fio/nvme/fio_plugin.o 00:04:14.334 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.334 CC examples/ioat/perf/perf.o 00:04:14.334 CC test/dma/test_dma/test_dma.o 00:04:14.334 CC test/app/bdev_svc/bdev_svc.o 00:04:14.334 CC app/fio/bdev/fio_plugin.o 00:04:14.334 LINK spdk_lspci 00:04:14.334 CC test/env/mem_callbacks/mem_callbacks.o 00:04:14.593 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.593 LINK rpc_client_test 00:04:14.593 LINK spdk_nvme_discover 00:04:14.593 LINK vtophys 00:04:14.593 LINK jsoncat 00:04:14.593 LINK histogram_perf 00:04:14.593 LINK interrupt_tgt 00:04:14.593 CXX test/cpp_headers/crc64.o 00:04:14.593 LINK poller_perf 00:04:14.593 LINK zipf 00:04:14.593 LINK nvmf_tgt 00:04:14.593 CXX test/cpp_headers/dif.o 00:04:14.593 CXX test/cpp_headers/dma.o 00:04:14.593 CXX test/cpp_headers/endian.o 00:04:14.593 CXX test/cpp_headers/env_dpdk.o 00:04:14.593 CXX test/cpp_headers/env.o 00:04:14.593 CXX test/cpp_headers/event.o 00:04:14.593 LINK env_dpdk_post_init 00:04:14.593 CXX test/cpp_headers/fd_group.o 00:04:14.593 CXX test/cpp_headers/fd.o 00:04:14.593 CXX test/cpp_headers/file.o 00:04:14.593 CXX test/cpp_headers/ftl.o 00:04:14.593 CXX test/cpp_headers/gpt_spec.o 00:04:14.593 CXX test/cpp_headers/hexlify.o 00:04:14.593 CXX test/cpp_headers/histogram_data.o 00:04:14.593 LINK stub 00:04:14.593 LINK iscsi_tgt 00:04:14.593 LINK spdk_trace_record 00:04:14.593 CXX test/cpp_headers/idxd.o 00:04:14.856 LINK ioat_perf 00:04:14.856 LINK verify 00:04:14.856 CXX test/cpp_headers/init.o 00:04:14.856 CXX test/cpp_headers/idxd_spec.o 00:04:14.856 LINK spdk_tgt 00:04:14.856 LINK bdev_svc 00:04:14.856 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.856 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.856 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.856 CXX test/cpp_headers/ioat.o 00:04:14.856 CXX test/cpp_headers/ioat_spec.o 00:04:14.856 CXX test/cpp_headers/iscsi_spec.o 00:04:14.856 CXX test/cpp_headers/json.o 00:04:14.856 LINK 
spdk_dd 00:04:14.856 CXX test/cpp_headers/jsonrpc.o 00:04:14.856 CXX test/cpp_headers/keyring.o 00:04:15.118 CXX test/cpp_headers/keyring_module.o 00:04:15.118 LINK spdk_trace 00:04:15.118 CXX test/cpp_headers/likely.o 00:04:15.118 CXX test/cpp_headers/log.o 00:04:15.118 CXX test/cpp_headers/lvol.o 00:04:15.118 CXX test/cpp_headers/memory.o 00:04:15.118 CXX test/cpp_headers/mmio.o 00:04:15.118 CXX test/cpp_headers/nbd.o 00:04:15.118 LINK pci_ut 00:04:15.118 CXX test/cpp_headers/net.o 00:04:15.118 CXX test/cpp_headers/notify.o 00:04:15.118 CXX test/cpp_headers/nvme.o 00:04:15.118 CXX test/cpp_headers/nvme_intel.o 00:04:15.118 CXX test/cpp_headers/nvme_ocssd.o 00:04:15.118 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:15.118 CXX test/cpp_headers/nvme_spec.o 00:04:15.118 CXX test/cpp_headers/nvme_zns.o 00:04:15.118 CXX test/cpp_headers/nvmf_cmd.o 00:04:15.118 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:15.118 CXX test/cpp_headers/nvmf.o 00:04:15.118 LINK test_dma 00:04:15.118 CXX test/cpp_headers/nvmf_spec.o 00:04:15.119 CXX test/cpp_headers/nvmf_transport.o 00:04:15.119 CXX test/cpp_headers/opal.o 00:04:15.119 CC test/event/event_perf/event_perf.o 00:04:15.119 CXX test/cpp_headers/opal_spec.o 00:04:15.380 CC test/event/reactor/reactor.o 00:04:15.380 CXX test/cpp_headers/pci_ids.o 00:04:15.380 CC test/event/reactor_perf/reactor_perf.o 00:04:15.380 LINK nvme_fuzz 00:04:15.380 CC examples/sock/hello_world/hello_sock.o 00:04:15.380 CC test/event/app_repeat/app_repeat.o 00:04:15.380 LINK spdk_nvme 00:04:15.380 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.380 CXX test/cpp_headers/pipe.o 00:04:15.380 CC examples/thread/thread/thread_ex.o 00:04:15.380 CC examples/idxd/perf/perf.o 00:04:15.380 CXX test/cpp_headers/queue.o 00:04:15.380 CXX test/cpp_headers/reduce.o 00:04:15.380 CC examples/vmd/led/led.o 00:04:15.380 CXX test/cpp_headers/rpc.o 00:04:15.380 CXX test/cpp_headers/scheduler.o 00:04:15.380 LINK spdk_bdev 00:04:15.380 CXX test/cpp_headers/scsi.o 00:04:15.380 CXX 
test/cpp_headers/scsi_spec.o 00:04:15.380 CXX test/cpp_headers/sock.o 00:04:15.380 CXX test/cpp_headers/stdinc.o 00:04:15.380 CXX test/cpp_headers/string.o 00:04:15.380 CXX test/cpp_headers/thread.o 00:04:15.380 CXX test/cpp_headers/trace.o 00:04:15.380 CC test/event/scheduler/scheduler.o 00:04:15.380 CXX test/cpp_headers/trace_parser.o 00:04:15.380 CXX test/cpp_headers/tree.o 00:04:15.380 CXX test/cpp_headers/ublk.o 00:04:15.380 CXX test/cpp_headers/util.o 00:04:15.380 CXX test/cpp_headers/uuid.o 00:04:15.380 CXX test/cpp_headers/version.o 00:04:15.667 CXX test/cpp_headers/vfio_user_pci.o 00:04:15.667 CXX test/cpp_headers/vfio_user_spec.o 00:04:15.667 LINK event_perf 00:04:15.667 CXX test/cpp_headers/vhost.o 00:04:15.667 CXX test/cpp_headers/vmd.o 00:04:15.667 CXX test/cpp_headers/xor.o 00:04:15.667 CXX test/cpp_headers/zipf.o 00:04:15.667 LINK reactor 00:04:15.667 LINK vhost_fuzz 00:04:15.667 LINK reactor_perf 00:04:15.667 LINK lsvmd 00:04:15.667 LINK app_repeat 00:04:15.667 LINK mem_callbacks 00:04:15.667 CC app/vhost/vhost.o 00:04:15.667 LINK led 00:04:15.667 LINK spdk_nvme_identify 00:04:15.667 LINK spdk_nvme_perf 00:04:15.667 LINK hello_sock 00:04:15.938 LINK spdk_top 00:04:15.938 LINK thread 00:04:15.938 CC test/nvme/e2edp/nvme_dp.o 00:04:15.938 CC test/nvme/startup/startup.o 00:04:15.938 CC test/nvme/aer/aer.o 00:04:15.938 CC test/nvme/sgl/sgl.o 00:04:15.938 CC test/nvme/reset/reset.o 00:04:15.938 CC test/nvme/reserve/reserve.o 00:04:15.938 CC test/nvme/err_injection/err_injection.o 00:04:15.938 CC test/nvme/overhead/overhead.o 00:04:15.939 CC test/nvme/simple_copy/simple_copy.o 00:04:15.939 LINK scheduler 00:04:15.939 CC test/nvme/connect_stress/connect_stress.o 00:04:15.939 CC test/accel/dif/dif.o 00:04:15.939 CC test/blobfs/mkfs/mkfs.o 00:04:15.939 CC test/nvme/compliance/nvme_compliance.o 00:04:15.939 CC test/nvme/fused_ordering/fused_ordering.o 00:04:15.939 CC test/nvme/boot_partition/boot_partition.o 00:04:15.939 CC test/nvme/cuse/cuse.o 00:04:15.939 
CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:15.939 CC test/nvme/fdp/fdp.o 00:04:15.939 LINK vhost 00:04:15.939 CC test/lvol/esnap/esnap.o 00:04:15.939 LINK idxd_perf 00:04:16.198 LINK err_injection 00:04:16.198 LINK startup 00:04:16.198 LINK connect_stress 00:04:16.198 LINK boot_partition 00:04:16.198 LINK mkfs 00:04:16.198 LINK fused_ordering 00:04:16.198 LINK reset 00:04:16.198 LINK overhead 00:04:16.198 CC examples/nvme/hotplug/hotplug.o 00:04:16.198 CC examples/nvme/hello_world/hello_world.o 00:04:16.198 CC examples/nvme/abort/abort.o 00:04:16.198 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.198 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.198 CC examples/nvme/arbitration/arbitration.o 00:04:16.198 CC examples/nvme/reconnect/reconnect.o 00:04:16.198 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:16.198 LINK reserve 00:04:16.198 LINK aer 00:04:16.198 CC examples/accel/perf/accel_perf.o 00:04:16.198 LINK doorbell_aers 00:04:16.456 LINK nvme_dp 00:04:16.456 LINK simple_copy 00:04:16.456 CC examples/blob/hello_world/hello_blob.o 00:04:16.456 CC examples/blob/cli/blobcli.o 00:04:16.456 LINK sgl 00:04:16.456 LINK nvme_compliance 00:04:16.456 LINK fdp 00:04:16.456 LINK memory_ut 00:04:16.456 LINK pmr_persistence 00:04:16.456 LINK dif 00:04:16.456 LINK hotplug 00:04:16.456 LINK cmb_copy 00:04:16.456 LINK hello_world 00:04:16.714 LINK hello_blob 00:04:16.714 LINK arbitration 00:04:16.714 LINK reconnect 00:04:16.714 LINK abort 00:04:16.714 LINK accel_perf 00:04:16.714 LINK nvme_manage 00:04:16.972 LINK blobcli 00:04:16.972 CC test/bdev/bdevio/bdevio.o 00:04:17.230 CC examples/bdev/hello_world/hello_bdev.o 00:04:17.230 CC examples/bdev/bdevperf/bdevperf.o 00:04:17.230 LINK iscsi_fuzz 00:04:17.488 LINK bdevio 00:04:17.488 LINK hello_bdev 00:04:17.488 LINK cuse 00:04:18.054 LINK bdevperf 00:04:18.312 CC examples/nvmf/nvmf/nvmf.o 00:04:18.569 LINK nvmf 00:04:21.097 LINK esnap 00:04:21.356 00:04:21.356 real 0m49.370s 00:04:21.356 user 10m5.846s 
00:04:21.356 sys 2m27.233s 00:04:21.356 07:09:53 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:21.356 07:09:53 make -- common/autotest_common.sh@10 -- $ set +x 00:04:21.356 ************************************ 00:04:21.356 END TEST make 00:04:21.356 ************************************ 00:04:21.356 07:09:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:21.356 07:09:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:21.356 07:09:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:21.356 07:09:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:21.356 07:09:53 -- pm/common@44 -- $ pid=2264024 00:04:21.356 07:09:53 -- pm/common@50 -- $ kill -TERM 2264024 00:04:21.356 07:09:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:21.356 07:09:53 -- pm/common@44 -- $ pid=2264026 00:04:21.356 07:09:53 -- pm/common@50 -- $ kill -TERM 2264026 00:04:21.356 07:09:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:21.356 07:09:53 -- pm/common@44 -- $ pid=2264028 00:04:21.356 07:09:53 -- pm/common@50 -- $ kill -TERM 2264028 00:04:21.356 07:09:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:21.356 07:09:53 -- pm/common@44 -- $ pid=2264055 00:04:21.356 07:09:53 -- pm/common@50 -- $ sudo -E kill -TERM 2264055 00:04:21.356 07:09:53 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.356 07:09:53 -- nvmf/common.sh@7 -- # uname -s 00:04:21.356 07:09:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.356 07:09:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.356 07:09:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.356 07:09:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.356 07:09:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.356 07:09:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.356 07:09:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.356 07:09:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.356 07:09:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.356 07:09:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.356 07:09:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:21.356 07:09:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:21.356 07:09:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.356 07:09:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.356 07:09:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:21.356 07:09:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.356 07:09:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.356 07:09:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.356 07:09:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.356 07:09:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.356 07:09:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.356 07:09:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.356 07:09:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.356 07:09:53 -- paths/export.sh@5 -- # export PATH 00:04:21.356 07:09:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.356 07:09:53 -- nvmf/common.sh@47 -- # : 0 00:04:21.356 07:09:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:21.356 07:09:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:21.356 07:09:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.356 07:09:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.356 07:09:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.356 07:09:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:21.356 07:09:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:21.356 07:09:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:21.356 07:09:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:21.356 07:09:53 -- spdk/autotest.sh@32 -- # 
uname -s 00:04:21.356 07:09:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:21.356 07:09:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:21.356 07:09:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:21.356 07:09:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:21.356 07:09:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:21.356 07:09:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:21.356 07:09:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:21.356 07:09:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:21.356 07:09:53 -- spdk/autotest.sh@48 -- # udevadm_pid=2320124 00:04:21.356 07:09:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:21.356 07:09:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:21.356 07:09:53 -- pm/common@17 -- # local monitor 00:04:21.356 07:09:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@21 -- # date +%s 00:04:21.356 07:09:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.356 07:09:53 -- pm/common@21 -- # date +%s 00:04:21.356 07:09:53 -- pm/common@25 -- # sleep 1 00:04:21.356 07:09:53 -- pm/common@21 -- # date +%s 00:04:21.356 07:09:53 -- pm/common@21 -- # date +%s 00:04:21.356 07:09:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884193 00:04:21.356 07:09:53 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884193 00:04:21.356 07:09:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884193 00:04:21.356 07:09:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884193 00:04:21.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884193_collect-vmstat.pm.log 00:04:21.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884193_collect-cpu-load.pm.log 00:04:21.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884193_collect-cpu-temp.pm.log 00:04:21.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884193_collect-bmc-pm.bmc.pm.log 00:04:22.730 07:09:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:22.730 07:09:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:22.730 07:09:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.730 07:09:54 -- common/autotest_common.sh@10 -- # set +x 00:04:22.730 07:09:54 -- spdk/autotest.sh@59 -- # create_test_list 00:04:22.730 07:09:54 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:22.730 07:09:54 -- common/autotest_common.sh@10 -- # set +x 00:04:22.730 07:09:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:22.730 07:09:54 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.730 07:09:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.730 07:09:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:22.730 07:09:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.730 07:09:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:22.730 07:09:54 -- common/autotest_common.sh@1455 -- # uname 00:04:22.730 07:09:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:22.730 07:09:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:22.730 07:09:54 -- common/autotest_common.sh@1475 -- # uname 00:04:22.730 07:09:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:22.730 07:09:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:22.730 07:09:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:22.730 07:09:54 -- spdk/autotest.sh@72 -- # hash lcov 00:04:22.730 07:09:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:22.730 07:09:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:22.730 --rc lcov_branch_coverage=1 00:04:22.730 --rc lcov_function_coverage=1 00:04:22.730 --rc genhtml_branch_coverage=1 00:04:22.730 --rc genhtml_function_coverage=1 00:04:22.730 --rc genhtml_legend=1 00:04:22.730 --rc geninfo_all_blocks=1 00:04:22.730 ' 00:04:22.730 07:09:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:22.730 --rc lcov_branch_coverage=1 00:04:22.730 --rc lcov_function_coverage=1 00:04:22.730 --rc genhtml_branch_coverage=1 00:04:22.730 --rc genhtml_function_coverage=1 00:04:22.730 --rc genhtml_legend=1 00:04:22.730 --rc geninfo_all_blocks=1 00:04:22.730 ' 00:04:22.730 07:09:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:22.730 --rc lcov_branch_coverage=1 00:04:22.730 --rc lcov_function_coverage=1 00:04:22.730 --rc genhtml_branch_coverage=1 00:04:22.730 --rc 
genhtml_function_coverage=1 00:04:22.730 --rc genhtml_legend=1 00:04:22.730 --rc geninfo_all_blocks=1 00:04:22.730 --no-external' 00:04:22.730 07:09:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:22.730 --rc lcov_branch_coverage=1 00:04:22.730 --rc lcov_function_coverage=1 00:04:22.730 --rc genhtml_branch_coverage=1 00:04:22.730 --rc genhtml_function_coverage=1 00:04:22.730 --rc genhtml_legend=1 00:04:22.730 --rc geninfo_all_blocks=1 00:04:22.730 --no-external' 00:04:22.730 07:09:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:22.730 lcov: LCOV version 1.14 00:04:22.730 07:09:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:24.103 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:24.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:24.104 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:24.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:24.104 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:24.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:24.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:24.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:24.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:24.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:24.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:24.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:24.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:24.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions 
found 00:04:24.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:24.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:24.363 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:24.363 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:24.363 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:24.363 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:24.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:24.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no 
functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:24.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:24.364 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:24.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:42.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:42.435 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.510 07:10:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:00.510 07:10:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.511 07:10:31 -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 07:10:31 -- spdk/autotest.sh@91 -- # rm -f 00:05:00.511 07:10:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.511 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:00.511 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:00.511 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:00.511 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:00.511 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:00.511 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:00.511 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:00.511 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:00.511 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:00.511 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:00.511 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:00.511 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:00.511 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:00.511 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:00.769 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:00.769 0000:80:04.1 (8086 0e21): 
Already using the ioatdma driver 00:05:00.769 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:00.769 07:10:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:00.769 07:10:33 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:00.769 07:10:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:00.769 07:10:33 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:00.769 07:10:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.769 07:10:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:00.769 07:10:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:00.769 07:10:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.769 07:10:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.769 07:10:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:00.769 07:10:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.769 07:10:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:00.769 07:10:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:00.769 07:10:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:00.769 07:10:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.769 No valid GPT data, bailing 00:05:00.769 07:10:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.769 07:10:33 -- scripts/common.sh@391 -- # pt= 00:05:00.769 07:10:33 -- scripts/common.sh@392 -- # return 1 00:05:00.769 07:10:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.769 1+0 records in 00:05:00.769 1+0 records out 00:05:00.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00275963 s, 380 MB/s 00:05:00.769 07:10:33 -- spdk/autotest.sh@118 -- # sync 00:05:00.769 07:10:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.769 07:10:33 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.769 07:10:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:02.670 07:10:34 -- spdk/autotest.sh@124 -- # uname -s 00:05:02.670 07:10:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:02.670 07:10:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:02.670 07:10:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.670 07:10:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.670 07:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:02.670 ************************************ 00:05:02.670 START TEST setup.sh 00:05:02.670 ************************************ 00:05:02.670 07:10:34 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:02.670 * Looking for test storage... 00:05:02.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.670 07:10:35 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:02.670 07:10:35 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:02.670 07:10:35 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:02.670 07:10:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.670 07:10:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.670 07:10:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.670 ************************************ 00:05:02.670 START TEST acl 00:05:02.670 ************************************ 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:02.670 * Looking for test storage... 
00:05:02.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.670 07:10:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.670 07:10:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.670 07:10:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:02.670 07:10:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:02.670 07:10:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:02.670 07:10:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:02.670 07:10:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:02.670 07:10:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.670 07:10:35 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.567 07:10:36 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:04.567 07:10:36 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:04.567 07:10:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.567 07:10:36 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:04.567 07:10:36 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.567 07:10:36 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:05.132 Hugepages 00:05:05.132 node hugesize free / total 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.132 00:05:05.132 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.132 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.389 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:05.389 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 
07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:05.390 07:10:37 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:05.390 07:10:37 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.390 07:10:37 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.390 07:10:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:05.390 ************************************ 00:05:05.390 START TEST denied 00:05:05.390 ************************************ 00:05:05.390 07:10:37 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:05.390 07:10:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:05:05.390 07:10:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:05.390 07:10:37 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:05:05.390 07:10:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.390 07:10:37 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.763 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:05:06.763 07:10:39 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:05:06.763 07:10:39 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:06.763 07:10:39 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:06.763 07:10:39 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:05:06.763 07:10:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:05:06.763 07:10:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:07.020 07:10:39 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:07.020 07:10:39 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:07.020 07:10:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.020 07:10:39 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.548 00:05:09.548 real 0m3.822s 00:05:09.548 user 0m1.110s 00:05:09.548 sys 0m1.788s 00:05:09.548 07:10:41 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.548 07:10:41 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:09.548 ************************************ 00:05:09.548 END TEST denied 00:05:09.548 ************************************ 00:05:09.548 07:10:41 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:09.548 07:10:41 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.548 07:10:41 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.548 07:10:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:09.548 ************************************ 00:05:09.548 START TEST allowed 00:05:09.548 
************************************ 00:05:09.548 07:10:41 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:09.548 07:10:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:05:09.548 07:10:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:09.548 07:10:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:05:09.548 07:10:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.548 07:10:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.447 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.448 07:10:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:11.448 07:10:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:11.448 07:10:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:11.448 07:10:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.448 07:10:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.349 00:05:13.349 real 0m3.828s 00:05:13.349 user 0m0.946s 00:05:13.349 sys 0m1.715s 00:05:13.349 07:10:45 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.349 07:10:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:13.349 ************************************ 00:05:13.349 END TEST allowed 00:05:13.349 ************************************ 00:05:13.349 00:05:13.349 real 0m10.466s 00:05:13.349 user 0m3.096s 00:05:13.349 sys 0m5.361s 00:05:13.349 07:10:45 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.349 07:10:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:13.349 ************************************ 00:05:13.349 END TEST acl 00:05:13.349 ************************************ 00:05:13.349 07:10:45 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:13.349 07:10:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.349 07:10:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.349 07:10:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.349 ************************************ 00:05:13.349 START TEST hugepages 00:05:13.349 ************************************ 00:05:13.349 07:10:45 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:13.349 * Looking for test storage... 00:05:13.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:13.349 07:10:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.350 
07:10:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43716628 kB' 'MemAvailable: 47222472 kB' 'Buffers: 2704 kB' 'Cached: 10293468 kB' 'SwapCached: 0 kB' 'Active: 7286072 kB' 'Inactive: 3508308 kB' 'Active(anon): 6889440 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501628 kB' 'Mapped: 190044 kB' 'Shmem: 6391232 kB' 'KReclaimable: 189484 kB' 'Slab: 564732 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 375248 kB' 'KernelStack: 13088 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 8041472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 
07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 
07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.350 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
'
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:13.351 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:13.352 07:10:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:13.352 07:10:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:13.352 07:10:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:13.352 07:10:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:13.352 ************************************
00:05:13.352 START TEST default_setup
00:05:13.352 ************************************
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.352 07:10:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:14.726 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:14.726 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:14.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:15.664 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:15.664 07:10:47
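Entries such as `0000:88:00.0 (8086 0a54): nvme -> vfio-pci` above report setup.sh moving devices to vfio-pci. A hedged sketch of the conventional sysfs unbind/driver_override/bind sequence behind such a message — this is an assumption about the mechanism, not setup.sh's code, and a temp directory mocks /sys/bus/pci so the sketch runs without root or real hardware:

```shell
# Hedged sketch of a sysfs driver rebind (nvme -> vfio-pci). $sys is a
# temp-dir mock of /sys/bus/pci; on a real system you would write to the
# real unbind/driver_override/bind files as root.
sys=$(mktemp -d)
bdf=0000:88:00.0
mkdir -p "$sys/devices/$bdf" "$sys/drivers/nvme" "$sys/drivers/vfio-pci"

rebind() {
  local dev=$1 from=$2 to=$3
  echo "$dev" > "$sys/drivers/$from/unbind"         # detach from current driver
  echo "$to"  > "$sys/devices/$dev/driver_override" # pin the target driver
  echo "$dev" > "$sys/drivers/$to/bind"             # attach to target driver
  echo "$dev: $from -> $to"
}

msg=$(rebind "$bdf" nvme vfio-pci)
echo "$msg"   # prints 0000:88:00.0: nvme -> vfio-pci
```

The mock temp directory is deliberately left in place here; a real script would have no cleanup step because it writes to the kernel's own sysfs files.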
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45790280 kB' 'MemAvailable: 49296124 kB' 'Buffers: 2704 kB' 'Cached: 10293560 kB' 'SwapCached: 0 kB' 'Active: 7303996 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907364 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519272 kB' 'Mapped: 189700 kB' 'Shmem: 6391324 kB' 'KReclaimable: 189484 kB' 'Slab: 563872 kB' 'SReclaimable: 
189484 kB' 'SUnreclaim: 374388 kB' 'KernelStack: 12784 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.664 07:10:47 setup.sh.hugepages.default_setup 
07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup --
setup/common.sh@19 -- # local var val 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45798808 kB' 'MemAvailable: 49304652 kB' 'Buffers: 2704 kB' 'Cached: 10293564 kB' 'SwapCached: 0 kB' 'Active: 7304280 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907648 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519596 kB' 'Mapped: 189684 kB' 'Shmem: 6391328 kB' 'KReclaimable: 189484 kB' 'Slab: 563872 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374388 kB' 'KernelStack: 12816 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:15.666 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup --
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45798276 kB' 'MemAvailable: 49304120 kB' 'Buffers: 2704 kB' 'Cached: 10293580 kB' 'SwapCached: 0 kB' 'Active: 7304124 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907492 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519400 kB' 'Mapped: 189624 kB' 'Shmem: 6391344 kB' 'KReclaimable: 189484 kB' 'Slab: 563900 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374416 kB' 'KernelStack: 12896 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 
'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.668 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [identical continue/IFS/read xtrace cycle for the remaining keys MemFree through CmaFree elided] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.670 nr_hugepages=1024 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.670 resv_hugepages=0 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.670 
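The trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` line by line until the requested key (here `HugePages_Rsvd`) matches, then echoing its value. A minimal sketch of that scan pattern — a simplified stand-in, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo key scan traced above.
# Assumption: simplified illustration, not the real setup/common.sh get_meminfo.
get_meminfo() {
	local get=$1 var val _
	# Split each "Key:   value [kB]" line on ':' and whitespace,
	# exactly as the traced `IFS=': ' read -r var val _` does.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done </proc/meminfo
	# Key absent on this kernel: report 0, matching the trace's `echo 0`.
	echo 0
}

get_meminfo HugePages_Rsvd
```

Each non-matching key produces one `[[ ... ]]` / `continue` / `IFS` / `read` quartet in the xtrace, which is why a single lookup emits dozens of near-identical trace lines.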
surplus_hugepages=0 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.670 anon_hugepages=0 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45798276 kB' 'MemAvailable: 49304120 kB' 'Buffers: 2704 kB' 'Cached: 10293604 kB' 'SwapCached: 0 kB' 'Active: 7304148 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907516 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 
'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519400 kB' 'Mapped: 189624 kB' 'Shmem: 6391368 kB' 'KReclaimable: 189484 kB' 'Slab: 563900 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374416 kB' 'KernelStack: 12896 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [identical continue/IFS/read xtrace cycle for the keys MemFree through Active elided] 00:05:15.670 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:05:15.671 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 
'MemFree: 20486680 kB' 'MemUsed: 12390260 kB' 'SwapCached: 0 kB' 'Active: 6011376 kB' 'Inactive: 3261456 kB' 'Active(anon): 5798184 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898044 kB' 'Mapped: 127220 kB' 'AnonPages: 377916 kB' 'Shmem: 5423396 kB' 'KernelStack: 7752 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356328 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.672 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical non-matching "continue" iterations for each remaining node0 meminfo key elided ...]
00:05:15.673 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.673 07:10:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.673 07:10:48 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:15.673 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:15.673 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:15.673 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:15.674 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:15.674 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:15.674 node0=1024 expecting 1024
00:05:15.674 07:10:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:15.674
00:05:15.674 real	0m2.419s
00:05:15.674 user	0m0.646s
00:05:15.674 sys	0m0.895s
00:05:15.674 07:10:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:15.674 07:10:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:15.674 ************************************
00:05:15.674 END TEST default_setup
00:05:15.674 ************************************
00:05:15.674 07:10:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:15.674 07:10:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:15.674 07:10:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:15.674 07:10:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:15.674 ************************************
00:05:15.674 START TEST per_node_1G_alloc
00:05:15.674 ************************************
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:15.674
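The field-by-field scan traced above is the `get_meminfo` pattern from setup/common.sh: split each /proc/meminfo line on `': '`, `continue` past every key until the requested one matches, then echo its value. A minimal self-contained sketch of that loop (the optional file argument is an assumption added here so the sketch can run against sample data; the real helper reads /proc/meminfo or a per-node meminfo directly):

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo loop seen in the trace. The second
# argument (a meminfo-style file) is added for illustration only.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, as in the '# continue' entries above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    echo 0   # key not found, mirroring the '# echo 0' fallthrough in the trace
}

sample=$(mktemp)
printf 'MemTotal: 60541728 kB\nHugePages_Surp: 0\n' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints 0, matching the trace
rm -f "$sample"
```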
07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:15.674 07:10:48
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:15.674 07:10:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:17.052 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:17.052 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:17.052 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:17.052 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:17.052 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:17.052 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:17.052 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:17.052 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:17.052 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:17.052 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:17.052 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:17.052 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:17.052 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:17.052 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:17.052 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:17.052 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:17.052 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.052 07:10:49
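The sizing steps traced at hugepages.sh@49-71 reduce to simple arithmetic: a 1048576 kB request at the 2048 kB default hugepage size (the Hugepagesize reported later in this log) yields 512 pages, which are then assigned to each requested node, matching `NRHUGE=512 HUGENODE=0,1`. A hedged sketch of that computation, assuming the page count comes from dividing the request by the default hugepage size; variable names mirror the trace but the logic here is a simplification, not the script itself:

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages / get_test_nr_hugepages_per_node as
# traced above (assumed inputs: 1048576 kB request, 2048 kB pages,
# nodes 0 and 1).
size=1048576            # requested size in kB
default_hugepages=2048  # Hugepagesize from /proc/meminfo, in kB
user_nodes=(0 1)

nr_hugepages=$(( size / default_hugepages ))   # 512

declare -A nodes_test
for _no_nodes in "${user_nodes[@]}"; do
    nodes_test[$_no_nodes]=$nr_hugepages       # 512 pages per node
done

echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"
# NRHUGE=512 HUGENODE=0,1
```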
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45798784 kB' 'MemAvailable: 49304628 kB' 'Buffers: 2704 kB' 'Cached: 10293672 kB' 'SwapCached: 0 kB' 'Active: 7304500 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907868 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519640 kB' 'Mapped: 189636 kB' 'Shmem: 6391436 kB' 'KReclaimable: 189484 kB' 'Slab: 563908 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374424 kB' 'KernelStack: 12848 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.052 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 
07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.053 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.054 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45801484 kB' 'MemAvailable: 49307328 kB' 'Buffers: 2704 kB' 'Cached: 10293676 kB' 'SwapCached: 0 kB' 'Active: 7304344 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907712 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519508 kB' 'Mapped: 189620 kB' 'Shmem: 6391440 kB' 'KReclaimable: 189484 kB' 'Slab: 563900 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374416 kB' 'KernelStack: 12880 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 
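The meminfo snapshot just printed is internally consistent: `HugePages_Total: 1024` pages at `Hugepagesize: 2048 kB` accounts exactly for the `Hugetlb: 2097152 kB` line, and with `HugePages_Free: 1024` none of the pool is in use yet. The cross-check as a two-line calculation:

```shell
#!/usr/bin/env bash
# Cross-check of the snapshot above: total hugepages times page size
# should equal the reported Hugetlb figure.
hugepages_total=1024
hugepagesize_kb=2048
hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))
echo "$hugetlb_kb kB"   # 2097152 kB, matching the Hugetlb line
```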
00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.054 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical compare-and-continue trace for the remaining meminfo keys (MemFree through HugePages_Rsvd, as listed in the printf snapshot above) trimmed ...] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45801268 kB' 'MemAvailable: 49307112 kB' 'Buffers: 2704 kB' 'Cached: 10293700 kB' 'SwapCached: 0 kB' 'Active: 7304268 kB' 'Inactive: 3508308 kB' 
'Active(anon): 6907636 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519468 kB' 'Mapped: 189692 kB' 'Shmem: 6391464 kB' 'KReclaimable: 189484 kB' 'Slab: 563972 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374488 kB' 'KernelStack: 12880 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.056 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... identical compare-and-continue trace for the remaining meminfo keys (as listed in the printf snapshot above) trimmed ...] 00:05:17.057 07:10:49
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.057 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:17.058 nr_hugepages=1024 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.058 resv_hugepages=0 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.058 surplus_hugepages=0 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.058 anon_hugepages=0 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45801364 kB' 'MemAvailable: 49307208 kB' 'Buffers: 2704 kB' 'Cached: 10293724 kB' 'SwapCached: 0 kB' 'Active: 7304396 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907764 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519568 kB' 'Mapped: 189632 kB' 'Shmem: 6391488 kB' 'KReclaimable: 189484 kB' 'Slab: 563972 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374488 kB' 'KernelStack: 12896 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
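The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue` records above is `common.sh`'s `get_meminfo` scanning each `Key: value` line of the meminfo snapshot until the requested key matches, then echoing its value. A minimal sketch of that parsing pattern (function and variable names here are illustrative, not `common.sh`'s own):

```shell
#!/usr/bin/env bash
# Scan "Key: value" lines on stdin; print the value for the requested key,
# or 0 if the key never appears (mirroring the trace's "echo 0" fallback).
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

sample='MemTotal: 60541728 kB
HugePages_Total: 1024
HugePages_Rsvd: 0'

get_field HugePages_Total <<<"$sample"   # prints 1024
get_field HugePages_Rsvd  <<<"$sample"   # prints 0
```

In the real script the input is either `/proc/meminfo` or, when a node is given, `/sys/devices/system/node/nodeN/meminfo` with the leading `Node N ` prefix stripped first.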
00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 
07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.059 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.060 07:10:49 
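The trace above shows setup/common.sh's `get_meminfo` walking every `key: value` pair of a meminfo file until the requested key (`HugePages_Total` here) matches, then echoing its value (1024) for the `(( 1024 == nr_hugepages + surp + resv ))` check in setup/hugepages.sh. A minimal sketch of that lookup, simplified from the trace (the optional file argument is a hypothetical addition for testability; the variable names mirror the xtrace):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the xtrace: scan "key: value"
# lines of a meminfo-style file and print the value of the requested key.
# The explicit mem_f argument is an assumption added here so the function
# can be exercised without a real /sys hierarchy.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=${3:-/proc/meminfo} line var val _
    # With a node and no explicit file, prefer the per-node meminfo.
    if [[ -z ${3:-} && -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix every line with "Node N "; strip it,
    # matching the mem=("${mem[@]#Node +([0-9]) }") step in the trace.
    mem=("${mem[@]#Node $node }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { printf '%s\n' "$val"; return 0; }
    done
    return 1
}
```

In the trace the same loop appears unrolled: one `[[ key == \H\u\g\e... ]]` / `continue` pair per meminfo line, until the matching key reaches the `echo` / `return 0` at the end.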
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.060 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21546184 kB' 'MemUsed: 11330756 kB' 'SwapCached: 0 kB' 'Active: 6011376 kB' 'Inactive: 3261456 kB' 'Active(anon): 5798184 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898056 kB' 'Mapped: 127228 kB' 'AnonPages: 377924 kB' 'Shmem: 5423408 kB' 'KernelStack: 7768 kB' 'PageTables: 5448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356332 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.060 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.320 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.320 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.320 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.320 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.320 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.320 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
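Between the two scans, the trace runs `get_nodes`: it globs `/sys/devices/system/node/node+([0-9])`, seeds `nodes_sys[N]=512` for each of the 2 NUMA nodes, then per node adds `resv` and the node's `HugePages_Surp` (0 for node 0 above, node 1 next). A hypothetical condensed sketch of that enumeration, with a base-directory argument added as an assumption for testing:

```shell
#!/usr/bin/env bash
shopt -s extglob   # required for the node+([0-9]) glob used by the original script

# Condensed sketch (assumption: simplified from setup/hugepages.sh as traced
# above): enumerate NUMA node directories and record each node's expected
# hugepage share -- 512 pages per node in this run.
get_nodes() {
    local base=${1:-/sys/devices/system/node} node
    nodes_sys=()
    for node in "$base"/node+([0-9]); do
        [[ -e $node ]] || continue       # skip the literal glob when nothing matches
        nodes_sys[${node##*node}]=512    # node index -> per-node page count
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))
}
```

With `no_nodes=2` and 512 pages each, the per-node sums reconcile against the global `HugePages_Total: 1024` checked earlier in the trace.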
00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24255696 kB' 'MemUsed: 3409092 kB' 'SwapCached: 0 kB' 'Active: 1292788 kB' 'Inactive: 246852 kB' 'Active(anon): 1109348 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1398412 kB' 'Mapped: 62404 kB' 'AnonPages: 141340 kB' 'Shmem: 968120 kB' 'KernelStack: 5096 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67952 kB' 'Slab: 207640 kB' 'SReclaimable: 67952 kB' 'SUnreclaim: 139688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.321 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 
07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.322 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
[... repeated xtrace elided: the same "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" record pair recurs for each remaining /proc/meminfo field, KReclaimable through HugePages_Free, until HugePages_Surp matches ...] 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:17.323 node0=512 expecting 512 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:17.323 node1=512 expecting 512 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:17.323 00:05:17.323 real 0m1.442s 00:05:17.323 user 0m0.586s 00:05:17.323 sys 0m0.819s 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.323 07:10:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:17.323 ************************************ 00:05:17.323 END TEST per_node_1G_alloc 00:05:17.323 ************************************ 00:05:17.323 07:10:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:17.323 07:10:49 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.323 07:10:49 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.323 07:10:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.323 ************************************ 00:05:17.323 START TEST even_2G_alloc 00:05:17.323 ************************************ 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
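The `get_test_nr_hugepages_per_node` trace above derives 1024 hugepages from a 2 GiB request (2097152 kB over a 2048 kB default page size) and, with no user node list, walks `_no_nodes` down from 2 assigning 512 pages to each node. A simplified stand-in for that even split (function and variable names here are illustrative, not the exact `setup/hugepages.sh` internals):

```shell
#!/usr/bin/env bash
# Sketch of the even per-node hugepage split the xtrace walks through:
# total pages divided equally across NUMA nodes, assigned highest node first.
split_hugepages_per_node() {
	local total=$1 nodes=$2 node
	local -a nodes_test
	# Assign from the last node down, mirroring the (( _no_nodes > 0 )) loop.
	for (( node = nodes - 1; node >= 0; node-- )); do
		nodes_test[node]=$(( total / nodes ))
	done
	for (( node = 0; node < nodes; node++ )); do
		printf 'node%d=%d\n' "$node" "${nodes_test[node]}"
	done
}

split_hugepages_per_node 1024 2
# node0=512
# node1=512
```

This matches the `node0=512 expecting 512` / `node1=512 expecting 512` lines the harness later echoes when verifying the allocation.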
00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.323 07:10:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:18.257 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:18.257 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:18.257 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:18.257 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:18.257 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:18.257 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:18.257 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:18.257 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:18.257 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:18.257 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:18.257 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:18.257 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:18.257 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:18.257 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:18.257 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:18.257 0000:80:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:05:18.257 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.520 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45809156 kB' 'MemAvailable: 49315000 kB' 'Buffers: 2704 kB' 'Cached: 10293812 kB' 'SwapCached: 0 kB' 'Active: 7304892 kB' 'Inactive: 3508308 kB' 'Active(anon): 6908260 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519924 kB' 'Mapped: 189644 kB' 'Shmem: 6391576 kB' 'KReclaimable: 189484 kB' 'Slab: 563796 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374312 kB' 'KernelStack: 12864 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.520 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... repeated xtrace elided: the same "setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" record pair recurs for every /proc/meminfo field from MemAvailable through HardwareCorrupted, until AnonHugePages matches ...] 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:18.521 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45809420 kB' 'MemAvailable: 49315264 kB' 'Buffers: 2704 kB' 'Cached: 10293816 kB' 'SwapCached: 0 kB' 'Active: 7304760 kB' 'Inactive: 3508308 kB' 'Active(anon): 6908128 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 
'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519836 kB' 'Mapped: 189636 kB' 'Shmem: 6391580 kB' 'KReclaimable: 189484 kB' 'Slab: 563764 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374280 kB' 'KernelStack: 12896 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.522 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.523 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.523 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45809128 kB' 'MemAvailable: 49314972 kB' 'Buffers: 2704 
kB' 'Cached: 10293832 kB' 'SwapCached: 0 kB' 'Active: 7304808 kB' 'Inactive: 3508308 kB' 'Active(anon): 6908176 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519804 kB' 'Mapped: 189636 kB' 'Shmem: 6391596 kB' 'KReclaimable: 189484 kB' 'Slab: 563864 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374380 kB' 'KernelStack: 12928 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.524 07:10:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.524 07:10:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical read / compare / continue trace repeated for each remaining /proc/meminfo field until HugePages_Rsvd ...] 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.526 nr_hugepages=1024 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.526 resv_hugepages=0 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.526 surplus_hugepages=0 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.526 anon_hugepages=0 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.526 07:10:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45809128 kB' 'MemAvailable: 49314972 kB' 'Buffers: 2704 kB' 'Cached: 10293856 kB' 'SwapCached: 0 kB' 'Active: 7304832 kB' 'Inactive: 3508308 kB' 'Active(anon): 6908200 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519808 kB' 'Mapped: 189636 kB' 'Shmem: 6391620 kB' 'KReclaimable: 189484 kB' 'Slab: 563864 kB' 'SReclaimable: 189484 kB' 'SUnreclaim: 374380 kB' 'KernelStack: 12928 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8062700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.526 07:10:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.526 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical read / compare / continue trace repeated for each remaining /proc/meminfo field until HugePages_Total ...] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 
00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.528 07:10:51 
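The trace above walks `get_nodes`: it finds two NUMA nodes under `/sys/devices/system/node`, records 512 reserved 2 MB pages for each, and the `hugepages.sh@110` check confirms the per-node counts add up to the global 1024 (`nr_hugepages + surp + resv`). A minimal stand-alone sketch of that consistency check, assuming the standard sysfs layout (the function name and the base-directory parameter are illustrative, not part of the test scripts):

```shell
#!/usr/bin/env bash
# Hypothetical helper: sum nr_hugepages across the per-node sysfs
# entries (nodeN/hugepages/hugepages-2048kB/nr_hugepages) so the
# result can be compared against the expected global total.
sum_node_hugepages() {
    local base=$1 total=0 f
    for f in "$base"/node[0-9]*/hugepages/hugepages-2048kB/nr_hugepages; do
        [[ -r $f ]] || continue       # skip nodes without 2 MB hugepage entries
        (( total += $(<"$f") ))
    done
    printf '%s\n' "$total"
}
```

On the machine in this log the sum would be 512 + 512 = 1024, matching the global `HugePages_Total: 1024` echoed just above.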
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21537848 kB' 'MemUsed: 11339092 kB' 'SwapCached: 0 kB' 'Active: 6011908 kB' 'Inactive: 3261456 kB' 'Active(anon): 5798716 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898060 kB' 'Mapped: 127232 kB' 'AnonPages: 378448 kB' 'Shmem: 5423412 kB' 'KernelStack: 7800 kB' 'PageTables: 5496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356272 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.528 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.788 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 
07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24271280 kB' 'MemUsed: 3393508 kB' 'SwapCached: 0 kB' 'Active: 1293020 kB' 'Inactive: 246852 kB' 'Active(anon): 1109580 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1398540 kB' 'Mapped: 62404 kB' 'AnonPages: 141392 kB' 'Shmem: 968248 kB' 'KernelStack: 5112 kB' 'PageTables: 2744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67952 kB' 'Slab: 207592 kB' 'SReclaimable: 67952 kB' 'SUnreclaim: 139640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 
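Both per-node scans use the same idiom from `setup/common.sh`'s `get_meminfo`: `mapfile` the node's meminfo, strip the leading `Node N ` prefix, then `read -r var val _` each line with `IFS=': '` until the requested key matches, at which point its value is echoed. A condensed re-reading of that loop (the function name is illustrative; only the parsing idiom comes from the trace, and the prefix strip is done here with `sed` rather than the script's parameter expansion):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo key-matching loop seen in the trace:
# drop the "Node N " prefix, split each line on ': ', and print the
# value for the requested key (first field after the colon).
get_meminfo_sketch() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { printf '%s\n' "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1                       # key not present in the file
}
```

This is why the trace prints one `continue` per non-matching key: every line of the dump is read in order until `HugePages_Surp` (or `HugePages_Total`) matches and the value is echoed with `return 0`.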
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.789 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.790 07:10:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.790 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / continue iteration for each remaining /proc/meminfo field (Shmem through HugePages_Total) until HugePages_Surp matches]
00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
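The scan traced above comes from a get_meminfo-style helper in setup/common.sh that walks /proc/meminfo one "Field: value kB" line at a time, splitting on ': ' and skipping fields until the requested one matches. A minimal self-contained sketch of that parsing pattern (the real helper uses mapfile over an array; here input comes from stdin and the sample values are taken from the snapshots in this trace):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in setup/common.sh: split each
# "Field: value kB" line on ': ' and return the value for the requested
# field. Reads stdin instead of /proc/meminfo so the sketch is
# self-contained.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Synthetic /proc/meminfo excerpt (values as reported in the trace).
meminfo='MemTotal: 60541728 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo HugePages_Surp <<<"$meminfo"   # prints 0
```

The `continue` lines repeated throughout the trace are simply this loop's non-matching iterations echoed by xtrace, one IFS/read/test triple per meminfo field.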
00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:18.791 node0=512 expecting 512 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:18.791 node1=512 expecting 512 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:18.791 00:05:18.791 real 0m1.427s 00:05:18.791 user 0m0.590s 00:05:18.791 sys 0m0.799s 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.791 07:10:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:18.791 ************************************ 00:05:18.791 END TEST even_2G_alloc 00:05:18.791 ************************************ 00:05:18.791 07:10:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:18.791 07:10:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.791 07:10:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.791 07:10:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:18.791 ************************************ 00:05:18.791 START TEST odd_alloc 00:05:18.791 ************************************ 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 
-- # nodes_test=() 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.791 07:10:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.725 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:19.726 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:19.726 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:19.726 0000:00:04.5 (8086 0e25): Already using the vfio-pci 
driver 00:05:19.726 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:19.726 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:19.726 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:19.726 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:19.726 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:19.726 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:19.726 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:19.726 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:19.726 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:19.726 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:19.726 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:19.726 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:19.726 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.989 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45815212 kB' 'MemAvailable: 49321052 kB' 'Buffers: 2704 kB' 'Cached: 10293936 kB' 'SwapCached: 0 kB' 'Active: 7303760 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907128 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518724 kB' 'Mapped: 189400 kB' 'Shmem: 6391700 kB' 'KReclaimable: 189476 kB' 'Slab: 563524 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374048 kB' 'KernelStack: 12928 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 8051852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for each /proc/meminfo field from SwapCached through Percpu until AnonHugePages matches]
00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.991 07:10:52
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45811268 kB' 'MemAvailable: 49317108 kB' 'Buffers: 
2704 kB' 'Cached: 10293940 kB' 'SwapCached: 0 kB' 'Active: 7306852 kB' 'Inactive: 3508308 kB' 'Active(anon): 6910220 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521780 kB' 'Mapped: 189344 kB' 'Shmem: 6391704 kB' 'KReclaimable: 189476 kB' 'Slab: 563524 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374048 kB' 'KernelStack: 13232 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 8055748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.991 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.992 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:19.993 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45805700 kB' 'MemAvailable: 49311540 kB' 'Buffers: 2704 kB' 'Cached: 10293952 kB' 'SwapCached: 0 kB' 'Active: 7308636 kB' 'Inactive: 3508308 kB' 'Active(anon): 6912004 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523492 kB' 'Mapped: 189612 kB' 'Shmem: 6391716 kB' 'KReclaimable: 189476 kB' 'Slab: 563528 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374052 kB' 'KernelStack: 13472 kB' 'PageTables: 9716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 
'Committed_AS: 8057760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.993 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.994 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.994 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:19.995 nr_hugepages=1025 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.995 resv_hugepages=0 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.995 surplus_hugepages=0 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.995 anon_hugepages=0 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45805780 kB' 'MemAvailable: 49311620 kB' 'Buffers: 2704 kB' 'Cached: 10293952 kB' 'SwapCached: 0 kB' 'Active: 7304400 kB' 'Inactive: 3508308 kB' 'Active(anon): 6907768 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519660 kB' 'Mapped: 189620 kB' 'Shmem: 6391716 kB' 'KReclaimable: 189476 kB' 'Slab: 563528 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374052 kB' 'KernelStack: 13248 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 8053940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.995 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.996 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local 
node 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.997 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21541600 kB' 'MemUsed: 11335340 kB' 'SwapCached: 0 kB' 'Active: 6009336 kB' 'Inactive: 3261456 kB' 'Active(anon): 5796144 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898128 kB' 'Mapped: 126916 kB' 'AnonPages: 375768 kB' 'Shmem: 5423480 kB' 'KernelStack: 7704 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356160 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.997 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.998 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24257700 kB' 'MemUsed: 3407088 kB' 'SwapCached: 0 kB' 'Active: 1298084 kB' 'Inactive: 246852 kB' 'Active(anon): 1114644 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1398576 kB' 'Mapped: 62496 kB' 'AnonPages: 146400 kB' 'Shmem: 968284 kB' 'KernelStack: 5176 kB' 'PageTables: 2744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67944 kB' 'Slab: 207352 kB' 'SReclaimable: 67944 kB' 
'SUnreclaim: 139408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.998 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 
07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.999 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.000 07:10:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.000 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:20.258 node0=512 expecting 513 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:20.258 node1=513 expecting 512 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:20.258 00:05:20.258 real 0m1.389s 00:05:20.258 user 0m0.587s 00:05:20.258 sys 0m0.761s 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.258 07:10:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:20.258 ************************************ 00:05:20.258 END TEST odd_alloc 00:05:20.258 ************************************ 00:05:20.258 07:10:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:20.258 07:10:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.258 07:10:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.258 07:10:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.258 ************************************ 00:05:20.258 START TEST custom_alloc 00:05:20.258 ************************************ 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.258 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.259 07:10:52 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:20.259 07:10:52 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:20.259 07:10:52 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.259 07:10:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.192 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:21.192 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:21.192 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:21.192 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:21.192 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:21.192 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:21.192 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:21.192 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:21.192 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:21.192 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:21.192 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:21.192 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:21.192 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:21.192 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:21.192 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:21.192 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:21.192 0000:80:04.0 (8086 0e20): Already using 
the vfio-pci driver 00:05:21.454 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:21.454 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:21.454 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.455 07:10:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44735900 kB' 'MemAvailable: 48241740 kB' 'Buffers: 2704 kB' 'Cached: 10294072 kB' 'SwapCached: 0 kB' 'Active: 7301056 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904424 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515844 kB' 'Mapped: 187924 kB' 'Shmem: 6391836 kB' 'KReclaimable: 189476 kB' 'Slab: 563720 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374244 kB' 'KernelStack: 12832 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 8015256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.455 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 
07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.456 
07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44735980 kB' 'MemAvailable: 48241820 kB' 'Buffers: 2704 kB' 'Cached: 10294076 kB' 'SwapCached: 0 kB' 'Active: 7300924 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904292 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 515684 kB' 'Mapped: 187924 kB' 'Shmem: 6391840 kB' 'KReclaimable: 189476 kB' 'Slab: 563732 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374256 kB' 'KernelStack: 12832 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 8015276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.456 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44736700 kB' 'MemAvailable: 48242540 kB' 'Buffers: 2704 kB' 'Cached: 10294092 kB' 'SwapCached: 0 kB' 'Active: 7300864 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904232 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515584 kB' 'Mapped: 187876 kB' 'Shmem: 6391856 kB' 'KReclaimable: 189476 kB' 'Slab: 563740 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374264 kB' 'KernelStack: 12816 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 8015296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.458 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:21.460 nr_hugepages=1536 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc 
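The scan traced above is setup/common.sh's get_meminfo helper walking a meminfo file one "Key: value" line at a time with IFS=': ' read, skipping every non-matching key with continue and echoing the value of the requested one. A minimal standalone sketch of that scan; the function name and the sample file in the test are illustrative, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Illustrative re-implementation of the per-field scan seen in the trace:
# read each "Key: value" line, skip non-matching keys, print the value.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching field: keep scanning
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # key not present in the file
}
```

When the real helper is given a node number, it switches the file it reads from /proc/meminfo to /sys/devices/system/node/node&lt;N&gt;/meminfo, which is why the per-node HugePages_Surp lookup later in this log reads node0's meminfo instead.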
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.460 resv_hugepages=0 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.460 surplus_hugepages=0 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.460 anon_hugepages=0 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44737368 kB' 'MemAvailable: 48243208 kB' 
'Buffers: 2704 kB' 'Cached: 10294112 kB' 'SwapCached: 0 kB' 'Active: 7300948 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904316 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515644 kB' 'Mapped: 187876 kB' 'Shmem: 6391876 kB' 'KReclaimable: 189476 kB' 'Slab: 563740 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374264 kB' 'KernelStack: 12848 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 8015316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:21.460 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.461 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical per-field scan of the remaining /proc/meminfo keys elided until HugePages_Total matches]
07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21525568 kB' 'MemUsed: 
11351372 kB' 'SwapCached: 0 kB' 'Active: 6010372 kB' 'Inactive: 3261456 kB' 'Active(anon): 5797180 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898256 kB' 'Mapped: 125608 kB' 'AnonPages: 376748 kB' 'Shmem: 5423608 kB' 'KernelStack: 7720 kB' 'PageTables: 5268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356404 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.462 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.462
[identical setup/common.sh@31-32 trace repeated for each remaining /sys/devices/system/node/node0/meminfo field through HugePages_Free]
00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.723 07:10:53
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23211548 kB' 'MemUsed: 4453240 kB' 'SwapCached: 0 kB' 'Active: 1290596 kB' 'Inactive: 246852 kB' 'Active(anon): 1107156 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1398584 kB' 'Mapped: 62268 kB' 'AnonPages: 138900 kB' 'Shmem: 968292 kB' 'KernelStack: 5128 kB' 'PageTables: 2600 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67944 kB' 'Slab: 207336 kB' 'SReclaimable: 67944 kB' 'SUnreclaim: 139392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.723 07:10:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.723
[identical setup/common.sh@31-32 trace repeated for each remaining /sys/devices/system/node/node1/meminfo field through HugePages_Free]
00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.724 07:10:54
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:21.724 node0=512 expecting 512 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:21.724 node1=1024 expecting 1024 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:21.724 00:05:21.724 real 0m1.446s 00:05:21.724 user 0m0.578s 00:05:21.724 sys 0m0.831s 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.724 07:10:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.724 ************************************ 00:05:21.724 END TEST custom_alloc 00:05:21.724 ************************************ 00:05:21.724 07:10:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:21.724 07:10:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.724 07:10:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.724 07:10:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:21.724 ************************************ 00:05:21.724 START TEST no_shrink_alloc 00:05:21.724 ************************************ 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:21.724 07:10:54 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.724 07:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.658 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:22.658 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:22.658 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:22.658 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:22.658 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:22.658 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:22.658 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:22.658 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:22.658 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:22.919 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:22.919 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:22.919 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:22.919 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:22.919 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:22.919 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:22.919 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:22.919 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.919 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45754456 kB' 'MemAvailable: 49260296 kB' 'Buffers: 2704 kB' 'Cached: 10294200 kB' 'SwapCached: 0 kB' 'Active: 7301836 kB' 'Inactive: 3508308 kB' 'Active(anon): 6905204 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516096 kB' 'Mapped: 188064 kB' 'Shmem: 6391964 kB' 'KReclaimable: 189476 kB' 'Slab: 563660 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374184 kB' 'KernelStack: 12816 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.920 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.921 
07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45758492 kB' 'MemAvailable: 49264332 kB' 'Buffers: 2704 kB' 'Cached: 10294200 kB' 'SwapCached: 0 kB' 'Active: 7301356 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904724 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515988 kB' 'Mapped: 
188048 kB' 'Shmem: 6391964 kB' 'KReclaimable: 189476 kB' 'Slab: 563660 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374184 kB' 'KernelStack: 12848 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.921 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.921 07:10:55 
00:05:22.921-00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue [identical trace repeated for every non-matching /proc/meminfo key: Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd]
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _
00:05:22.923 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n'
  'MemTotal: 60541728 kB' 'MemFree: 45759420 kB' 'MemAvailable: 49265260 kB' 'Buffers: 2704 kB' 'Cached: 10294224 kB' 'SwapCached: 0 kB'
  'Active: 7301116 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904484 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB'
  'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB'
  'AnonPages: 515736 kB' 'Mapped: 187876 kB' 'Shmem: 6391988 kB' 'KReclaimable: 189476 kB' 'Slab: 563724 kB' 'SReclaimable: 189476 kB'
  'SUnreclaim: 374248 kB' 'KernelStack: 12864 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB'
  'CommitLimit: 37610892 kB' 'Committed_AS: 8015560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB'
  'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB'
  'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0'
  'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB'
00:05:22.923-00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue [identical trace repeated for every non-matching key from MemTotal through Percpu]
00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 
07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:22.925 nr_hugepages=1024 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.925 resv_hugepages=0 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.925 surplus_hugepages=0 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.925 anon_hugepages=0 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.925 07:10:55 
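The long `continue` loop traced above is `setup/common.sh`'s `get_meminfo` pattern: read `Key: value` pairs with `IFS=': '`, skip every key that does not match the requested one, then echo the matching value (defaulting to 0). A minimal standalone sketch of that pattern follows; the `sample` file and the trimmed-down `get_meminfo` here are illustrative stand-ins, not the SPDK script itself, though the values are taken from the run above.

```shell
#!/usr/bin/env bash
shopt -s extglob

# Sketch of the get_meminfo loop traced in the log: scan "Key: value" pairs,
# "continue" past non-matching keys, and echo the value of the requested one.
get_meminfo() {
    local get=$1 mem_f=$2 line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node meminfo lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    echo 0
}

# Illustrative sample built from values seen in the run above
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541728 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' >"$sample"
total=$(get_meminfo HugePages_Total "$sample")
rsvd=$(get_meminfo HugePages_Rsvd "$sample")
echo "nr_hugepages=$total resv_hugepages=$rsvd"   # nr_hugepages=1024 resv_hugepages=0
rm -f "$sample"
```

This is why each key in the trace appears as a `[[ Key == \H\u\g\e... ]]` test followed by `continue`: the xtrace output escapes every character of the unmatched glob pattern, so one linear scan of `/proc/meminfo` produces one comparison line per key until `HugePages_Rsvd` (or `HugePages_Total`) is reached.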
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.925 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45759420 kB' 'MemAvailable: 49265260 kB' 'Buffers: 2704 kB' 'Cached: 10294224 kB' 'SwapCached: 0 kB' 'Active: 7300836 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904204 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515460 kB' 'Mapped: 187876 kB' 'Shmem: 6391988 kB' 'KReclaimable: 189476 kB' 'Slab: 563724 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374248 kB' 'KernelStack: 12864 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.187 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.188 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20463156 kB' 'MemUsed: 12413784 kB' 'SwapCached: 0 kB' 'Active: 6010196 kB' 'Inactive: 3261456 kB' 'Active(anon): 5797004 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898316 kB' 'Mapped: 125608 kB' 'AnonPages: 376512 kB' 'Shmem: 5423668 kB' 'KernelStack: 7704 kB' 'PageTables: 5260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356328 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 
07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.189 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 
07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.190 node0=1024 expecting 1024 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.190 07:10:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:24.142 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:24.142 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:24.142 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:24.142 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:24.142 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:24.142 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:24.142 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:24.142 0000:00:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:05:24.142 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:24.142 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:24.142 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:24.143 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:24.143 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:24.143 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:24.143 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:24.143 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:24.143 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:24.406 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.406 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45760188 kB' 'MemAvailable: 49266028 kB' 'Buffers: 2704 kB' 'Cached: 10294308 kB' 'SwapCached: 0 kB' 'Active: 7301316 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904684 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515792 kB' 'Mapped: 187956 kB' 'Shmem: 6392072 kB' 'KReclaimable: 189476 kB' 'Slab: 563768 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374292 kB' 'KernelStack: 12880 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.407 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.408 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45760940 kB' 'MemAvailable: 49266780 kB' 'Buffers: 2704 kB' 'Cached: 10294312 kB' 'SwapCached: 0 kB' 'Active: 7301352 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904720 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515816 kB' 'Mapped: 187892 kB' 'Shmem: 6392076 kB' 'KReclaimable: 189476 kB' 'Slab: 563760 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374284 kB' 'KernelStack: 12880 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.408 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.409 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
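The trace above is one complete `get_meminfo HugePages_Surp` call: the script snapshots the meminfo file with `mapfile`, strips any `Node N ` prefix, then walks each line with `IFS=': ' read`, skipping keys until the requested one matches and echoing its value. A minimal sketch of that loop, reconstructed from the trace (not the actual `setup/common.sh` source; the `MEMINFO_FILE` override is an assumption added here so the function can be exercised against a fixture file instead of the live `/proc/meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop seen in the xtrace above.
# MEMINFO_FILE is NOT part of the original script -- it is a hypothetical
# hook so the parser can be pointed at a test fixture.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    # Per-node lookups read the sysfs copy when it exists (common.sh@23-25).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"                  # common.sh@28
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")           # strip "Node N " prefix (common.sh@29)
    local var val _
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line" # common.sh@31
        if [[ $var == "$get" ]]; then          # common.sh@32
            echo "$val"                        # common.sh@33: value without the "kB" unit
            return 0
        fi
    done
    echo 0
}
```

Each non-matching key produces the `[[ ... == ... ]]` / `continue` pair repeated throughout the log; the escaped pattern (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) is just how bash xtrace renders the quoted right-hand side of `[[ $var == "$get" ]]`.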
00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45761052 kB' 'MemAvailable: 49266892 kB' 'Buffers: 2704 kB' 'Cached: 10294332 kB' 'SwapCached: 0 kB' 'Active: 7301372 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904740 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515832 kB' 'Mapped: 187892 kB' 'Shmem: 6392096 kB' 'KReclaimable: 189476 kB' 'Slab: 563748 kB' 
'SReclaimable: 189476 kB' 'SUnreclaim: 374272 kB' 'KernelStack: 12896 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.410 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.411 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.412 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.412 nr_hugepages=1024 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.412 resv_hugepages=0 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.412 surplus_hugepages=0 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.412 anon_hugepages=0 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.412 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45761780 kB' 'MemAvailable: 49267620 kB' 'Buffers: 2704 kB' 'Cached: 10294352 kB' 'SwapCached: 0 kB' 'Active: 7301372 kB' 'Inactive: 3508308 kB' 'Active(anon): 6904740 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515828 kB' 'Mapped: 187892 kB' 'Shmem: 6392116 kB' 'KReclaimable: 189476 kB' 'Slab: 563748 kB' 'SReclaimable: 189476 kB' 'SUnreclaim: 374272 kB' 'KernelStack: 12896 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 8015688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2174556 kB' 'DirectMap2M: 16619520 kB' 'DirectMap1G: 50331648 kB' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.413 07:10:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... the IFS=': ' / read -r var val _ / continue loop repeats identically for every remaining /proc/meminfo field, from Inactive(file) through Unaccepted, until HugePages_Total is reached ...] 00:05:24.413 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.414 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.415 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20460268 kB' 'MemUsed: 12416672 kB' 'SwapCached: 0 kB' 'Active: 6010188 kB' 'Inactive: 3261456 kB' 'Active(anon): 5796996 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8898316 kB' 'Mapped: 125624 kB' 'AnonPages: 376472 kB' 'Shmem: 5423668 kB' 'KernelStack: 7704 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121532 kB' 'Slab: 356308 kB' 'SReclaimable: 121532 kB' 'SUnreclaim: 234776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.415 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.415 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... the same IFS=': ' / read / continue loop repeats for every remaining node0 meminfo field, from MemFree through HugePages_Free ...] 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo
'node0=1024 expecting 1024' 00:05:24.416 node0=1024 expecting 1024 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.416 00:05:24.416 real 0m2.855s 00:05:24.416 user 0m1.221s 00:05:24.416 sys 0m1.560s 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.416 07:10:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:24.416 ************************************ 00:05:24.416 END TEST no_shrink_alloc 00:05:24.416 ************************************ 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.674 07:10:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:24.674 07:10:56 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:24.674 00:05:24.674 real 0m11.372s 00:05:24.675 user 0m4.371s 00:05:24.675 sys 0m5.918s 00:05:24.675 07:10:56 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.675 07:10:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.675 ************************************ 00:05:24.675 END TEST hugepages 00:05:24.675 ************************************ 00:05:24.675 07:10:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:24.675 07:10:56 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.675 07:10:56 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.675 07:10:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.675 ************************************ 00:05:24.675 START TEST driver 00:05:24.675 ************************************ 00:05:24.675 07:10:56 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:24.675 * Looking for test storage... 
00:05:24.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:24.675 07:10:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:24.675 07:10:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.675 07:10:57 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.209 07:10:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:27.209 07:10:59 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.209 07:10:59 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.209 07:10:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.209 ************************************ 00:05:27.209 START TEST guess_driver 00:05:27.209 ************************************ 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:27.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:27.209 Looking for driver=vfio-pci 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.209 07:10:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:28.584 07:11:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.522 07:11:01 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.052 00:05:32.052 real 0m4.802s 00:05:32.052 user 0m1.133s 00:05:32.052 sys 0m1.831s 00:05:32.052 07:11:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.052 07:11:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:32.052 ************************************ 00:05:32.052 END TEST guess_driver 00:05:32.052 ************************************ 00:05:32.052 00:05:32.052 real 0m7.424s 00:05:32.052 user 0m1.752s 00:05:32.052 sys 0m2.867s 00:05:32.052 07:11:04 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.052 07:11:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:32.052 ************************************ 00:05:32.052 END TEST driver 00:05:32.052 ************************************ 00:05:32.052 07:11:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:32.052 07:11:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.052 07:11:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.052 07:11:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:32.052 ************************************ 00:05:32.052 START TEST devices 00:05:32.052 ************************************ 00:05:32.052 07:11:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:32.052 * Looking for test storage... 
00:05:32.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:32.052 07:11:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:32.052 07:11:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:32.052 07:11:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.052 07:11:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.422 07:11:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:33.422 07:11:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:33.422 07:11:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:33.422 07:11:05 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:33.679 No valid GPT data, bailing 00:05:33.679 07:11:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:33.679 07:11:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:33.679 07:11:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:33.679 07:11:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:33.679 07:11:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:33.679 07:11:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:33.679 07:11:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:33.679 07:11:05 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.679 07:11:05 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.679 07:11:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.679 ************************************ 00:05:33.679 START TEST nvme_mount 00:05:33.679 ************************************ 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:33.679 07:11:06 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:33.679 07:11:06 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:34.610 Creating new GPT entries in memory. 00:05:34.611 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:34.611 other utilities. 00:05:34.611 07:11:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:34.611 07:11:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.611 07:11:07 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:34.611 07:11:07 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.611 07:11:07 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:35.540 Creating new GPT entries in memory. 00:05:35.540 The operation has completed successfully. 
00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2340516 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:35.540 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:35.797 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.798 07:11:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:36.729 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:36.988 
07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:36.988 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.988 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.246 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:37.246 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:37.246 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:37.246 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.246 07:11:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:38.180 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.180 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:38.180 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:38.180 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.180 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.181 07:11:10 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:38.439 07:11:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.439 07:11:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:39.882 07:11:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.882 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:39.882 00:05:39.882 real 0m6.131s 00:05:39.882 user 0m1.407s 00:05:39.882 sys 0m2.304s 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.882 07:11:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:39.882 ************************************ 00:05:39.882 END TEST nvme_mount 00:05:39.882 ************************************ 00:05:39.882 07:11:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:39.882 07:11:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:05:39.882 07:11:12 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.882 07:11:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.882 ************************************ 00:05:39.882 START TEST dm_mount 00:05:39.882 ************************************ 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.882 07:11:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:40.815 Creating new GPT entries in memory. 00:05:40.815 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.815 other utilities. 00:05:40.815 07:11:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.815 07:11:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.815 07:11:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.815 07:11:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.815 07:11:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:41.750 Creating new GPT entries in memory. 00:05:41.750 The operation has completed successfully. 00:05:41.750 07:11:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.750 07:11:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.750 07:11:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.750 07:11:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.750 07:11:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:43.123 The operation has completed successfully. 
00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2342904 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.123 07:11:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:44.057 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.315 07:11:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:45.249 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:45.508 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:45.508 00:05:45.508 real 0m5.782s 00:05:45.508 user 0m0.934s 00:05:45.508 sys 0m1.703s 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.508 07:11:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:45.508 ************************************ 00:05:45.508 END TEST dm_mount 00:05:45.508 ************************************ 00:05:45.508 07:11:17 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:45.508 07:11:17 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:45.508 07:11:17 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.508 07:11:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.508 07:11:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:45.508 07:11:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.508 07:11:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.766 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:45.766 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:45.766 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:45.766 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]]
00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:45.766 07:11:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:45.766
00:05:45.766 real 0m13.819s
00:05:45.766 user 0m3.016s
00:05:45.766 sys 0m5.005s
00:05:45.766 07:11:18 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:45.766 07:11:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:45.766 ************************************
00:05:45.766 END TEST devices
00:05:45.766 ************************************
00:05:46.024
00:05:46.024 real 0m43.326s
00:05:46.024 user 0m12.329s
00:05:46.024 sys 0m19.317s
00:05:46.024 07:11:18 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:46.024 07:11:18 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:46.024 ************************************
00:05:46.024 END TEST setup.sh
00:05:46.024 ************************************
00:05:46.024 07:11:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:46.957 Hugepages
00:05:46.957 node hugesize free / total
00:05:46.957 node0 1048576kB 0 / 0
00:05:46.957 node0 2048kB 2048 / 2048
00:05:46.957 node1 1048576kB 0 / 0
00:05:46.957 node1 2048kB 0 / 0
00:05:46.957
00:05:46.957 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:46.957 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:05:46.957 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:05:46.957 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:05:47.215 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:47.215 07:11:19 -- spdk/autotest.sh@130 -- # uname -s
00:05:47.215 07:11:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:05:47.215 07:11:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:05:47.215 07:11:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:48.148 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:48.148 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:48.148 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:48.148 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:48.148 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:48.406 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:48.406 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:48.406 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:48.406 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:49.339 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:49.339 07:11:21 -- common/autotest_common.sh@1532 -- # sleep 1
00:05:50.713 07:11:22 --
common/autotest_common.sh@1533 -- # bdfs=() 00:05:50.713 07:11:22 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:50.713 07:11:22 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:50.713 07:11:22 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:50.713 07:11:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:50.713 07:11:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:50.713 07:11:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.713 07:11:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:50.713 07:11:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:50.713 07:11:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:50.713 07:11:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:50.713 07:11:22 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:51.647 Waiting for block devices as requested 00:05:51.647 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:51.647 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:51.647 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:51.905 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:51.905 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:51.905 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:52.163 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:52.163 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:52.163 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:52.163 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:52.421 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:52.421 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:52.421 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:52.421 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:52.680 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:05:52.680 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:52.680 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:52.943 07:11:25 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:52.943 07:11:25 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:52.943 07:11:25 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:52.943 07:11:25 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:52.943 07:11:25 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:52.943 07:11:25 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:52.943 07:11:25 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:52.943 07:11:25 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:52.943 07:11:25 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:52.943 07:11:25 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:52.943 07:11:25 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:52.943 07:11:25 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:52.943 07:11:25 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:52.943 07:11:25 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:52.943 07:11:25 -- 
common/autotest_common.sh@1557 -- # continue 00:05:52.943 07:11:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:52.943 07:11:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.943 07:11:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.943 07:11:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:52.943 07:11:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.943 07:11:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.943 07:11:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:54.317 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:54.317 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:54.317 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:55.250 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:55.250 07:11:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:55.250 07:11:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.250 07:11:27 -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 07:11:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:55.250 07:11:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:55.250 07:11:27 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:55.250 07:11:27 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:55.250 07:11:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:55.250 07:11:27 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:55.250 07:11:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:55.250 07:11:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:55.250 07:11:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:55.250 07:11:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:55.250 07:11:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:55.250 07:11:27 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:55.250 07:11:27 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:55.250 07:11:27 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:55.250 07:11:27 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:55.250 07:11:27 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:55.250 07:11:27 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:55.250 07:11:27 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:55.250 07:11:27 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:55.250 07:11:27 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:55.250 07:11:27 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2348086 00:05:55.250 07:11:27 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.250 07:11:27 -- common/autotest_common.sh@1598 -- # waitforlisten 2348086 00:05:55.250 07:11:27 -- common/autotest_common.sh@831 -- # '[' -z 2348086 ']' 00:05:55.250 07:11:27 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:55.250 07:11:27 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.250 07:11:27 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.250 07:11:27 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.250 07:11:27 -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 [2024-07-25 07:11:27.753462] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:05:55.250 [2024-07-25 07:11:27.753571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348086 ] 00:05:55.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.508 [2024-07-25 07:11:27.815325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.508 [2024-07-25 07:11:27.932674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.442 07:11:28 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.442 07:11:28 -- common/autotest_common.sh@864 -- # return 0 00:05:56.442 07:11:28 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:56.442 07:11:28 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:56.442 07:11:28 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:59.721 nvme0n1 00:05:59.721 07:11:31 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:59.721 [2024-07-25 07:11:32.016636] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18
00:05:59.721 [2024-07-25 07:11:32.016687] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:05:59.721 request:
00:05:59.721 {
00:05:59.721 "nvme_ctrlr_name": "nvme0",
00:05:59.721 "password": "test",
00:05:59.721 "method": "bdev_nvme_opal_revert",
00:05:59.721 "req_id": 1
00:05:59.721 }
00:05:59.721 Got JSON-RPC error response
00:05:59.721 response:
00:05:59.721 {
00:05:59.721 "code": -32603,
00:05:59.721 "message": "Internal error"
00:05:59.721 }
00:05:59.721 07:11:32 -- common/autotest_common.sh@1604 -- # true
00:05:59.721 07:11:32 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:05:59.721 07:11:32 -- common/autotest_common.sh@1608 -- # killprocess 2348086
00:05:59.721 07:11:32 -- common/autotest_common.sh@950 -- # '[' -z 2348086 ']'
00:05:59.721 07:11:32 -- common/autotest_common.sh@954 -- # kill -0 2348086
00:05:59.721 07:11:32 -- common/autotest_common.sh@955 -- # uname
00:05:59.721 07:11:32 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:59.721 07:11:32 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2348086
00:05:59.721 07:11:32 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:59.721 07:11:32 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:59.721 07:11:32 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2348086'
00:05:59.721 killing process with pid 2348086
00:05:59.721 07:11:32 -- common/autotest_common.sh@969 -- # kill 2348086
00:05:59.721 07:11:32 -- common/autotest_common.sh@974 -- # wait 2348086
00:06:01.617 07:11:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:06:01.617 07:11:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:06:01.617 07:11:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:06:01.617 07:11:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:06:01.617 07:11:33 -- spdk/autotest.sh@162 -- # timing_enter lib
00:06:01.617 07:11:33 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:01.617
07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 07:11:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:01.617 07:11:33 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:01.617 07:11:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.617 07:11:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.617 07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 ************************************ 00:06:01.617 START TEST env 00:06:01.617 ************************************ 00:06:01.617 07:11:33 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:01.617 * Looking for test storage... 00:06:01.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:01.617 07:11:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.617 07:11:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.617 07:11:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.617 07:11:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 ************************************ 00:06:01.617 START TEST env_memory 00:06:01.617 ************************************ 00:06:01.617 07:11:33 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.617 00:06:01.617 00:06:01.617 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.617 http://cunit.sourceforge.net/ 00:06:01.617 00:06:01.617 00:06:01.617 Suite: memory 00:06:01.617 Test: alloc and free memory map ...[2024-07-25 07:11:34.007318] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:01.617 passed 00:06:01.617 Test: mem map translation 
...[2024-07-25 07:11:34.028096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:01.617 [2024-07-25 07:11:34.028117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:01.617 [2024-07-25 07:11:34.028173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:01.617 [2024-07-25 07:11:34.028185] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:01.617 passed 00:06:01.617 Test: mem map registration ...[2024-07-25 07:11:34.070079] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:01.617 [2024-07-25 07:11:34.070098] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:01.617 passed 00:06:01.617 Test: mem map adjacent registrations ...passed 00:06:01.617 00:06:01.617 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.617 suites 1 1 n/a 0 0 00:06:01.617 tests 4 4 4 0 0 00:06:01.617 asserts 152 152 152 0 n/a 00:06:01.617 00:06:01.617 Elapsed time = 0.145 seconds 00:06:01.617 00:06:01.617 real 0m0.153s 00:06:01.617 user 0m0.144s 00:06:01.617 sys 0m0.009s 00:06:01.617 07:11:34 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.618 07:11:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:01.618 ************************************ 00:06:01.618 END TEST env_memory 00:06:01.618 
************************************ 00:06:01.877 07:11:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.877 07:11:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.877 07:11:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.877 07:11:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.877 ************************************ 00:06:01.877 START TEST env_vtophys 00:06:01.877 ************************************ 00:06:01.877 07:11:34 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.877 EAL: lib.eal log level changed from notice to debug 00:06:01.877 EAL: Detected lcore 0 as core 0 on socket 0 00:06:01.877 EAL: Detected lcore 1 as core 1 on socket 0 00:06:01.877 EAL: Detected lcore 2 as core 2 on socket 0 00:06:01.877 EAL: Detected lcore 3 as core 3 on socket 0 00:06:01.877 EAL: Detected lcore 4 as core 4 on socket 0 00:06:01.877 EAL: Detected lcore 5 as core 5 on socket 0 00:06:01.877 EAL: Detected lcore 6 as core 8 on socket 0 00:06:01.877 EAL: Detected lcore 7 as core 9 on socket 0 00:06:01.877 EAL: Detected lcore 8 as core 10 on socket 0 00:06:01.877 EAL: Detected lcore 9 as core 11 on socket 0 00:06:01.877 EAL: Detected lcore 10 as core 12 on socket 0 00:06:01.877 EAL: Detected lcore 11 as core 13 on socket 0 00:06:01.877 EAL: Detected lcore 12 as core 0 on socket 1 00:06:01.877 EAL: Detected lcore 13 as core 1 on socket 1 00:06:01.877 EAL: Detected lcore 14 as core 2 on socket 1 00:06:01.877 EAL: Detected lcore 15 as core 3 on socket 1 00:06:01.877 EAL: Detected lcore 16 as core 4 on socket 1 00:06:01.877 EAL: Detected lcore 17 as core 5 on socket 1 00:06:01.877 EAL: Detected lcore 18 as core 8 on socket 1 00:06:01.877 EAL: Detected lcore 19 as core 9 on socket 1 00:06:01.877 EAL: Detected lcore 20 as core 10 on socket 1 00:06:01.877 EAL: 
Detected lcore 21 as core 11 on socket 1 00:06:01.877 EAL: Detected lcore 22 as core 12 on socket 1 00:06:01.877 EAL: Detected lcore 23 as core 13 on socket 1 00:06:01.877 EAL: Detected lcore 24 as core 0 on socket 0 00:06:01.878 EAL: Detected lcore 25 as core 1 on socket 0 00:06:01.878 EAL: Detected lcore 26 as core 2 on socket 0 00:06:01.878 EAL: Detected lcore 27 as core 3 on socket 0 00:06:01.878 EAL: Detected lcore 28 as core 4 on socket 0 00:06:01.878 EAL: Detected lcore 29 as core 5 on socket 0 00:06:01.878 EAL: Detected lcore 30 as core 8 on socket 0 00:06:01.878 EAL: Detected lcore 31 as core 9 on socket 0 00:06:01.878 EAL: Detected lcore 32 as core 10 on socket 0 00:06:01.878 EAL: Detected lcore 33 as core 11 on socket 0 00:06:01.878 EAL: Detected lcore 34 as core 12 on socket 0 00:06:01.878 EAL: Detected lcore 35 as core 13 on socket 0 00:06:01.878 EAL: Detected lcore 36 as core 0 on socket 1 00:06:01.878 EAL: Detected lcore 37 as core 1 on socket 1 00:06:01.878 EAL: Detected lcore 38 as core 2 on socket 1 00:06:01.878 EAL: Detected lcore 39 as core 3 on socket 1 00:06:01.878 EAL: Detected lcore 40 as core 4 on socket 1 00:06:01.878 EAL: Detected lcore 41 as core 5 on socket 1 00:06:01.878 EAL: Detected lcore 42 as core 8 on socket 1 00:06:01.878 EAL: Detected lcore 43 as core 9 on socket 1 00:06:01.878 EAL: Detected lcore 44 as core 10 on socket 1 00:06:01.878 EAL: Detected lcore 45 as core 11 on socket 1 00:06:01.878 EAL: Detected lcore 46 as core 12 on socket 1 00:06:01.878 EAL: Detected lcore 47 as core 13 on socket 1 00:06:01.878 EAL: Maximum logical cores by configuration: 128 00:06:01.878 EAL: Detected CPU lcores: 48 00:06:01.878 EAL: Detected NUMA nodes: 2 00:06:01.878 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:01.878 EAL: Detected shared linkage of DPDK 00:06:01.878 EAL: No shared files mode enabled, IPC will be disabled 00:06:01.878 EAL: Bus pci wants IOVA as 'DC' 00:06:01.878 EAL: Buses did not request a specific IOVA mode. 
00:06:01.878 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:01.878 EAL: Selected IOVA mode 'VA' 00:06:01.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.878 EAL: Probing VFIO support... 00:06:01.878 EAL: IOMMU type 1 (Type 1) is supported 00:06:01.878 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:01.878 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:01.878 EAL: VFIO support initialized 00:06:01.878 EAL: Ask a virtual area of 0x2e000 bytes 00:06:01.878 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:01.878 EAL: Setting up physically contiguous memory... 00:06:01.878 EAL: Setting maximum number of open files to 524288 00:06:01.878 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:01.878 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:01.878 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 
EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:01.878 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:01.878 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.878 EAL: Virtual area found at 0x201c00e00000 (size = 
0x61000) 00:06:01.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.878 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.878 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:01.878 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:01.878 EAL: Hugepages will be freed exactly as allocated. 00:06:01.878 EAL: No shared files mode enabled, IPC is disabled 00:06:01.878 EAL: No shared files mode enabled, IPC is disabled 00:06:01.878 EAL: TSC frequency is ~2700000 KHz 00:06:01.878 EAL: Main lcore 0 is ready (tid=7f587af4aa00;cpuset=[0]) 00:06:01.878 EAL: Trying to obtain current memory policy. 00:06:01.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.878 EAL: Restoring previous memory policy: 0 00:06:01.878 EAL: request: mp_malloc_sync 00:06:01.878 EAL: No shared files mode enabled, IPC is disabled 00:06:01.878 EAL: Heap on socket 0 was expanded by 2MB 00:06:01.878 EAL: No shared files mode enabled, IPC is disabled 00:06:01.878 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:01.878 EAL: Mem event callback 'spdk:(nil)' registered 00:06:01.878 00:06:01.878 00:06:01.878 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.878 http://cunit.sourceforge.net/ 00:06:01.878 00:06:01.878 00:06:01.878 Suite: components_suite 00:06:01.878 Test: vtophys_malloc_test ...passed 00:06:01.878 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:01.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.878 EAL: Restoring previous memory policy: 4 00:06:01.878 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.878 EAL: request: mp_malloc_sync 00:06:01.878 EAL: No shared files mode enabled, IPC is disabled 00:06:01.878 EAL: Heap on socket 0 was expanded by 4MB 00:06:01.878 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.878 EAL: request: mp_malloc_sync 00:06:01.878 EAL: No shared files mode enabled, IPC is disabled 00:06:01.878 EAL: Heap on socket 0 was shrunk by 4MB 00:06:01.878 EAL: Trying to obtain current memory policy. 00:06:01.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.879 EAL: Restoring previous memory policy: 4 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was expanded by 6MB 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was shrunk by 6MB 00:06:01.879 EAL: Trying to obtain current memory policy. 00:06:01.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.879 EAL: Restoring previous memory policy: 4 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was expanded by 10MB 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was shrunk by 10MB 00:06:01.879 EAL: Trying to obtain current memory policy. 
00:06:01.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.879 EAL: Restoring previous memory policy: 4 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was expanded by 18MB 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was shrunk by 18MB 00:06:01.879 EAL: Trying to obtain current memory policy. 00:06:01.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.879 EAL: Restoring previous memory policy: 4 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was expanded by 34MB 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was shrunk by 34MB 00:06:01.879 EAL: Trying to obtain current memory policy. 00:06:01.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.879 EAL: Restoring previous memory policy: 4 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was expanded by 66MB 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was shrunk by 66MB 00:06:01.879 EAL: Trying to obtain current memory policy. 
00:06:01.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.879 EAL: Restoring previous memory policy: 4 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.879 EAL: request: mp_malloc_sync 00:06:01.879 EAL: No shared files mode enabled, IPC is disabled 00:06:01.879 EAL: Heap on socket 0 was expanded by 130MB 00:06:01.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.137 EAL: request: mp_malloc_sync 00:06:02.137 EAL: No shared files mode enabled, IPC is disabled 00:06:02.137 EAL: Heap on socket 0 was shrunk by 130MB 00:06:02.137 EAL: Trying to obtain current memory policy. 00:06:02.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.137 EAL: Restoring previous memory policy: 4 00:06:02.137 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.137 EAL: request: mp_malloc_sync 00:06:02.137 EAL: No shared files mode enabled, IPC is disabled 00:06:02.137 EAL: Heap on socket 0 was expanded by 258MB 00:06:02.137 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.137 EAL: request: mp_malloc_sync 00:06:02.137 EAL: No shared files mode enabled, IPC is disabled 00:06:02.137 EAL: Heap on socket 0 was shrunk by 258MB 00:06:02.137 EAL: Trying to obtain current memory policy. 00:06:02.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.394 EAL: Restoring previous memory policy: 4 00:06:02.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.394 EAL: request: mp_malloc_sync 00:06:02.394 EAL: No shared files mode enabled, IPC is disabled 00:06:02.394 EAL: Heap on socket 0 was expanded by 514MB 00:06:02.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.651 EAL: request: mp_malloc_sync 00:06:02.651 EAL: No shared files mode enabled, IPC is disabled 00:06:02.651 EAL: Heap on socket 0 was shrunk by 514MB 00:06:02.651 EAL: Trying to obtain current memory policy. 
00:06:02.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.908 EAL: Restoring previous memory policy: 4 00:06:02.908 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.908 EAL: request: mp_malloc_sync 00:06:02.908 EAL: No shared files mode enabled, IPC is disabled 00:06:02.908 EAL: Heap on socket 0 was expanded by 1026MB 00:06:02.908 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.166 EAL: request: mp_malloc_sync 00:06:03.166 EAL: No shared files mode enabled, IPC is disabled 00:06:03.166 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:03.166 passed 00:06:03.166 00:06:03.166 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.166 suites 1 1 n/a 0 0 00:06:03.166 tests 2 2 2 0 0 00:06:03.166 asserts 497 497 497 0 n/a 00:06:03.166 00:06:03.166 Elapsed time = 1.372 seconds 00:06:03.166 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.166 EAL: request: mp_malloc_sync 00:06:03.166 EAL: No shared files mode enabled, IPC is disabled 00:06:03.166 EAL: Heap on socket 0 was shrunk by 2MB 00:06:03.166 EAL: No shared files mode enabled, IPC is disabled 00:06:03.166 EAL: No shared files mode enabled, IPC is disabled 00:06:03.166 EAL: No shared files mode enabled, IPC is disabled 00:06:03.166 00:06:03.166 real 0m1.488s 00:06:03.166 user 0m0.856s 00:06:03.166 sys 0m0.602s 00:06:03.166 07:11:35 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.166 07:11:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:03.166 ************************************ 00:06:03.166 END TEST env_vtophys 00:06:03.166 ************************************ 00:06:03.166 07:11:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.166 07:11:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.166 07:11:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.166 07:11:35 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.423 
************************************ 00:06:03.423 START TEST env_pci 00:06:03.423 ************************************ 00:06:03.423 07:11:35 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.423 00:06:03.423 00:06:03.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.423 http://cunit.sourceforge.net/ 00:06:03.423 00:06:03.423 00:06:03.423 Suite: pci 00:06:03.423 Test: pci_hook ...[2024-07-25 07:11:35.716874] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2349100 has claimed it 00:06:03.424 EAL: Cannot find device (10000:00:01.0) 00:06:03.424 EAL: Failed to attach device on primary process 00:06:03.424 passed 00:06:03.424 00:06:03.424 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.424 suites 1 1 n/a 0 0 00:06:03.424 tests 1 1 1 0 0 00:06:03.424 asserts 25 25 25 0 n/a 00:06:03.424 00:06:03.424 Elapsed time = 0.022 seconds 00:06:03.424 00:06:03.424 real 0m0.034s 00:06:03.424 user 0m0.008s 00:06:03.424 sys 0m0.026s 00:06:03.424 07:11:35 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.424 07:11:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 ************************************ 00:06:03.424 END TEST env_pci 00:06:03.424 ************************************ 00:06:03.424 07:11:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:03.424 07:11:35 env -- env/env.sh@15 -- # uname 00:06:03.424 07:11:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:03.424 07:11:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:03.424 07:11:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.424 07:11:35 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:03.424 07:11:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.424 07:11:35 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 ************************************ 00:06:03.424 START TEST env_dpdk_post_init 00:06:03.424 ************************************ 00:06:03.424 07:11:35 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.424 EAL: Detected CPU lcores: 48 00:06:03.424 EAL: Detected NUMA nodes: 2 00:06:03.424 EAL: Detected shared linkage of DPDK 00:06:03.424 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.424 EAL: Selected IOVA mode 'VA' 00:06:03.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.424 EAL: VFIO support initialized 00:06:03.424 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.424 EAL: Using IOMMU type 1 (Type 1) 00:06:03.424 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:03.424 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:03.424 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:03.424 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:03.681 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:04.612 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:07.890 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:07.890 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:07.890 Starting DPDK initialization... 00:06:07.890 Starting SPDK post initialization... 00:06:07.890 SPDK NVMe probe 00:06:07.890 Attaching to 0000:88:00.0 00:06:07.890 Attached to 0000:88:00.0 00:06:07.890 Cleaning up... 00:06:07.890 00:06:07.890 real 0m4.399s 00:06:07.890 user 0m3.273s 00:06:07.890 sys 0m0.181s 00:06:07.890 07:11:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.890 07:11:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.890 ************************************ 00:06:07.890 END TEST env_dpdk_post_init 00:06:07.890 ************************************ 00:06:07.890 07:11:40 env -- env/env.sh@26 -- # uname 00:06:07.890 07:11:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:07.890 07:11:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:07.890 07:11:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.890 07:11:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.890 07:11:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.890 ************************************ 00:06:07.890 START TEST env_mem_callbacks 00:06:07.890 
************************************ 00:06:07.890 07:11:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:07.890 EAL: Detected CPU lcores: 48 00:06:07.890 EAL: Detected NUMA nodes: 2 00:06:07.890 EAL: Detected shared linkage of DPDK 00:06:07.890 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:07.890 EAL: Selected IOVA mode 'VA' 00:06:07.890 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.890 EAL: VFIO support initialized 00:06:07.890 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:07.890 00:06:07.890 00:06:07.890 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.890 http://cunit.sourceforge.net/ 00:06:07.890 00:06:07.890 00:06:07.890 Suite: memory 00:06:07.890 Test: test ... 00:06:07.890 register 0x200000200000 2097152 00:06:07.890 malloc 3145728 00:06:07.890 register 0x200000400000 4194304 00:06:07.890 buf 0x200000500000 len 3145728 PASSED 00:06:07.890 malloc 64 00:06:07.890 buf 0x2000004fff40 len 64 PASSED 00:06:07.890 malloc 4194304 00:06:07.890 register 0x200000800000 6291456 00:06:07.890 buf 0x200000a00000 len 4194304 PASSED 00:06:07.890 free 0x200000500000 3145728 00:06:07.890 free 0x2000004fff40 64 00:06:07.890 unregister 0x200000400000 4194304 PASSED 00:06:07.890 free 0x200000a00000 4194304 00:06:07.890 unregister 0x200000800000 6291456 PASSED 00:06:07.890 malloc 8388608 00:06:07.890 register 0x200000400000 10485760 00:06:07.890 buf 0x200000600000 len 8388608 PASSED 00:06:07.890 free 0x200000600000 8388608 00:06:07.890 unregister 0x200000400000 10485760 PASSED 00:06:07.890 passed 00:06:07.890 00:06:07.890 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.890 suites 1 1 n/a 0 0 00:06:07.890 tests 1 1 1 0 0 00:06:07.890 asserts 15 15 15 0 n/a 00:06:07.890 00:06:07.890 Elapsed time = 0.005 seconds 00:06:07.890 00:06:07.890 real 0m0.048s 00:06:07.890 user 0m0.013s 00:06:07.890 sys 0m0.035s 
00:06:07.890 07:11:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.890 07:11:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:07.890 ************************************ 00:06:07.890 END TEST env_mem_callbacks 00:06:07.890 ************************************ 00:06:07.890 00:06:07.890 real 0m6.417s 00:06:07.890 user 0m4.426s 00:06:07.890 sys 0m1.033s 00:06:07.890 07:11:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.890 07:11:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.890 ************************************ 00:06:07.890 END TEST env 00:06:07.890 ************************************ 00:06:07.890 07:11:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:07.890 07:11:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.890 07:11:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.890 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.890 ************************************ 00:06:07.890 START TEST rpc 00:06:07.890 ************************************ 00:06:07.890 07:11:40 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:07.890 * Looking for test storage... 
00:06:07.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:07.890 07:11:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2349758 00:06:07.890 07:11:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:07.891 07:11:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.891 07:11:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2349758 00:06:07.891 07:11:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 2349758 ']' 00:06:07.891 07:11:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.891 07:11:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.891 07:11:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.891 07:11:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.891 07:11:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.149 [2024-07-25 07:11:40.457796] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:08.149 [2024-07-25 07:11:40.457890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349758 ] 00:06:08.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.149 [2024-07-25 07:11:40.514925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.149 [2024-07-25 07:11:40.621619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:06:08.149 [2024-07-25 07:11:40.621674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2349758' to capture a snapshot of events at runtime. 00:06:08.149 [2024-07-25 07:11:40.621703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.149 [2024-07-25 07:11:40.621714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.149 [2024-07-25 07:11:40.621724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2349758 for offline analysis/debug. 00:06:08.149 [2024-07-25 07:11:40.621752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.408 07:11:40 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.408 07:11:40 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.408 07:11:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.408 07:11:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.408 07:11:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:08.408 07:11:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:08.408 07:11:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.408 07:11:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.408 07:11:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.408 
************************************ 00:06:08.408 START TEST rpc_integrity 00:06:08.408 ************************************ 00:06:08.408 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:08.408 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:08.408 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.408 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.408 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.408 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:08.408 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:08.666 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.666 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.666 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.666 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.666 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.666 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:08.666 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.666 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.666 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.666 07:11:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.666 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.666 { 00:06:08.666 "name": "Malloc0", 00:06:08.666 "aliases": [ 00:06:08.666 "c872f327-5819-4aea-913d-27e957aa9f7b" 00:06:08.666 ], 00:06:08.666 "product_name": "Malloc disk", 00:06:08.666 "block_size": 512, 00:06:08.666 "num_blocks": 16384, 00:06:08.666 "uuid": "c872f327-5819-4aea-913d-27e957aa9f7b", 00:06:08.666 
"assigned_rate_limits": { 00:06:08.666 "rw_ios_per_sec": 0, 00:06:08.666 "rw_mbytes_per_sec": 0, 00:06:08.666 "r_mbytes_per_sec": 0, 00:06:08.666 "w_mbytes_per_sec": 0 00:06:08.666 }, 00:06:08.666 "claimed": false, 00:06:08.666 "zoned": false, 00:06:08.666 "supported_io_types": { 00:06:08.666 "read": true, 00:06:08.666 "write": true, 00:06:08.666 "unmap": true, 00:06:08.666 "flush": true, 00:06:08.666 "reset": true, 00:06:08.666 "nvme_admin": false, 00:06:08.666 "nvme_io": false, 00:06:08.666 "nvme_io_md": false, 00:06:08.666 "write_zeroes": true, 00:06:08.666 "zcopy": true, 00:06:08.666 "get_zone_info": false, 00:06:08.666 "zone_management": false, 00:06:08.666 "zone_append": false, 00:06:08.666 "compare": false, 00:06:08.666 "compare_and_write": false, 00:06:08.666 "abort": true, 00:06:08.666 "seek_hole": false, 00:06:08.666 "seek_data": false, 00:06:08.666 "copy": true, 00:06:08.666 "nvme_iov_md": false 00:06:08.666 }, 00:06:08.666 "memory_domains": [ 00:06:08.666 { 00:06:08.666 "dma_device_id": "system", 00:06:08.666 "dma_device_type": 1 00:06:08.666 }, 00:06:08.666 { 00:06:08.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.666 "dma_device_type": 2 00:06:08.666 } 00:06:08.666 ], 00:06:08.666 "driver_specific": {} 00:06:08.666 } 00:06:08.666 ]' 00:06:08.666 07:11:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:08.666 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.666 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:08.666 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.666 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.666 [2024-07-25 07:11:41.014787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:08.666 [2024-07-25 07:11:41.014831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.666 [2024-07-25 07:11:41.014856] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd95d70 00:06:08.666 [2024-07-25 07:11:41.014872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.666 [2024-07-25 07:11:41.016425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.666 [2024-07-25 07:11:41.016451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.666 Passthru0 00:06:08.666 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.666 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.666 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.666 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.666 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.666 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.666 { 00:06:08.666 "name": "Malloc0", 00:06:08.666 "aliases": [ 00:06:08.666 "c872f327-5819-4aea-913d-27e957aa9f7b" 00:06:08.666 ], 00:06:08.666 "product_name": "Malloc disk", 00:06:08.666 "block_size": 512, 00:06:08.666 "num_blocks": 16384, 00:06:08.666 "uuid": "c872f327-5819-4aea-913d-27e957aa9f7b", 00:06:08.666 "assigned_rate_limits": { 00:06:08.666 "rw_ios_per_sec": 0, 00:06:08.666 "rw_mbytes_per_sec": 0, 00:06:08.666 "r_mbytes_per_sec": 0, 00:06:08.666 "w_mbytes_per_sec": 0 00:06:08.666 }, 00:06:08.666 "claimed": true, 00:06:08.666 "claim_type": "exclusive_write", 00:06:08.666 "zoned": false, 00:06:08.666 "supported_io_types": { 00:06:08.666 "read": true, 00:06:08.666 "write": true, 00:06:08.666 "unmap": true, 00:06:08.666 "flush": true, 00:06:08.666 "reset": true, 00:06:08.666 "nvme_admin": false, 00:06:08.666 "nvme_io": false, 00:06:08.666 "nvme_io_md": false, 00:06:08.666 "write_zeroes": true, 00:06:08.666 "zcopy": true, 00:06:08.666 "get_zone_info": false, 00:06:08.666 
"zone_management": false, 00:06:08.666 "zone_append": false, 00:06:08.666 "compare": false, 00:06:08.666 "compare_and_write": false, 00:06:08.666 "abort": true, 00:06:08.666 "seek_hole": false, 00:06:08.666 "seek_data": false, 00:06:08.666 "copy": true, 00:06:08.666 "nvme_iov_md": false 00:06:08.666 }, 00:06:08.666 "memory_domains": [ 00:06:08.666 { 00:06:08.666 "dma_device_id": "system", 00:06:08.666 "dma_device_type": 1 00:06:08.666 }, 00:06:08.666 { 00:06:08.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.666 "dma_device_type": 2 00:06:08.666 } 00:06:08.666 ], 00:06:08.666 "driver_specific": {} 00:06:08.666 }, 00:06:08.666 { 00:06:08.666 "name": "Passthru0", 00:06:08.666 "aliases": [ 00:06:08.666 "22f398e1-efc2-59c9-986b-aa02a5a4e642" 00:06:08.666 ], 00:06:08.666 "product_name": "passthru", 00:06:08.666 "block_size": 512, 00:06:08.666 "num_blocks": 16384, 00:06:08.666 "uuid": "22f398e1-efc2-59c9-986b-aa02a5a4e642", 00:06:08.666 "assigned_rate_limits": { 00:06:08.666 "rw_ios_per_sec": 0, 00:06:08.666 "rw_mbytes_per_sec": 0, 00:06:08.666 "r_mbytes_per_sec": 0, 00:06:08.666 "w_mbytes_per_sec": 0 00:06:08.666 }, 00:06:08.666 "claimed": false, 00:06:08.666 "zoned": false, 00:06:08.667 "supported_io_types": { 00:06:08.667 "read": true, 00:06:08.667 "write": true, 00:06:08.667 "unmap": true, 00:06:08.667 "flush": true, 00:06:08.667 "reset": true, 00:06:08.667 "nvme_admin": false, 00:06:08.667 "nvme_io": false, 00:06:08.667 "nvme_io_md": false, 00:06:08.667 "write_zeroes": true, 00:06:08.667 "zcopy": true, 00:06:08.667 "get_zone_info": false, 00:06:08.667 "zone_management": false, 00:06:08.667 "zone_append": false, 00:06:08.667 "compare": false, 00:06:08.667 "compare_and_write": false, 00:06:08.667 "abort": true, 00:06:08.667 "seek_hole": false, 00:06:08.667 "seek_data": false, 00:06:08.667 "copy": true, 00:06:08.667 "nvme_iov_md": false 00:06:08.667 }, 00:06:08.667 "memory_domains": [ 00:06:08.667 { 00:06:08.667 "dma_device_id": "system", 00:06:08.667 
"dma_device_type": 1 00:06:08.667 }, 00:06:08.667 { 00:06:08.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.667 "dma_device_type": 2 00:06:08.667 } 00:06:08.667 ], 00:06:08.667 "driver_specific": { 00:06:08.667 "passthru": { 00:06:08.667 "name": "Passthru0", 00:06:08.667 "base_bdev_name": "Malloc0" 00:06:08.667 } 00:06:08.667 } 00:06:08.667 } 00:06:08.667 ]' 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.667 07:11:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.667 00:06:08.667 real 0m0.228s 00:06:08.667 user 0m0.155s 00:06:08.667 sys 0m0.017s 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:08.667 07:11:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.667 ************************************ 00:06:08.667 END TEST rpc_integrity 00:06:08.667 ************************************ 00:06:08.667 07:11:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:08.667 07:11:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.667 07:11:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.667 07:11:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.667 ************************************ 00:06:08.667 START TEST rpc_plugins 00:06:08.667 ************************************ 00:06:08.667 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:08.667 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:08.667 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.667 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.667 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.667 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:08.667 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:08.667 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.667 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:08.953 { 00:06:08.953 "name": "Malloc1", 00:06:08.953 "aliases": [ 00:06:08.953 "a13df146-6e5e-4c31-a31d-0984f47d74d8" 00:06:08.953 ], 00:06:08.953 "product_name": "Malloc disk", 00:06:08.953 "block_size": 4096, 00:06:08.953 "num_blocks": 256, 00:06:08.953 "uuid": "a13df146-6e5e-4c31-a31d-0984f47d74d8", 00:06:08.953 "assigned_rate_limits": { 00:06:08.953 
"rw_ios_per_sec": 0, 00:06:08.953 "rw_mbytes_per_sec": 0, 00:06:08.953 "r_mbytes_per_sec": 0, 00:06:08.953 "w_mbytes_per_sec": 0 00:06:08.953 }, 00:06:08.953 "claimed": false, 00:06:08.953 "zoned": false, 00:06:08.953 "supported_io_types": { 00:06:08.953 "read": true, 00:06:08.953 "write": true, 00:06:08.953 "unmap": true, 00:06:08.953 "flush": true, 00:06:08.953 "reset": true, 00:06:08.953 "nvme_admin": false, 00:06:08.953 "nvme_io": false, 00:06:08.953 "nvme_io_md": false, 00:06:08.953 "write_zeroes": true, 00:06:08.953 "zcopy": true, 00:06:08.953 "get_zone_info": false, 00:06:08.953 "zone_management": false, 00:06:08.953 "zone_append": false, 00:06:08.953 "compare": false, 00:06:08.953 "compare_and_write": false, 00:06:08.953 "abort": true, 00:06:08.953 "seek_hole": false, 00:06:08.953 "seek_data": false, 00:06:08.953 "copy": true, 00:06:08.953 "nvme_iov_md": false 00:06:08.953 }, 00:06:08.953 "memory_domains": [ 00:06:08.953 { 00:06:08.953 "dma_device_id": "system", 00:06:08.953 "dma_device_type": 1 00:06:08.953 }, 00:06:08.953 { 00:06:08.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.953 "dma_device_type": 2 00:06:08.953 } 00:06:08.953 ], 00:06:08.953 "driver_specific": {} 00:06:08.953 } 00:06:08.953 ]' 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:08.953 07:11:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:08.953 00:06:08.953 real 0m0.109s 00:06:08.953 user 0m0.074s 00:06:08.953 sys 0m0.008s 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.953 07:11:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.953 ************************************ 00:06:08.953 END TEST rpc_plugins 00:06:08.953 ************************************ 00:06:08.953 07:11:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:08.953 07:11:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.953 07:11:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.953 07:11:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.953 ************************************ 00:06:08.953 START TEST rpc_trace_cmd_test 00:06:08.953 ************************************ 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.953 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:08.953 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2349758", 00:06:08.953 "tpoint_group_mask": "0x8", 00:06:08.953 "iscsi_conn": { 00:06:08.953 "mask": "0x2", 00:06:08.953 
"tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "scsi": { 00:06:08.953 "mask": "0x4", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "bdev": { 00:06:08.953 "mask": "0x8", 00:06:08.953 "tpoint_mask": "0xffffffffffffffff" 00:06:08.953 }, 00:06:08.953 "nvmf_rdma": { 00:06:08.953 "mask": "0x10", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "nvmf_tcp": { 00:06:08.953 "mask": "0x20", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "ftl": { 00:06:08.953 "mask": "0x40", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "blobfs": { 00:06:08.953 "mask": "0x80", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "dsa": { 00:06:08.953 "mask": "0x200", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "thread": { 00:06:08.953 "mask": "0x400", 00:06:08.953 "tpoint_mask": "0x0" 00:06:08.953 }, 00:06:08.953 "nvme_pcie": { 00:06:08.953 "mask": "0x800", 00:06:08.954 "tpoint_mask": "0x0" 00:06:08.954 }, 00:06:08.954 "iaa": { 00:06:08.954 "mask": "0x1000", 00:06:08.954 "tpoint_mask": "0x0" 00:06:08.954 }, 00:06:08.954 "nvme_tcp": { 00:06:08.954 "mask": "0x2000", 00:06:08.954 "tpoint_mask": "0x0" 00:06:08.954 }, 00:06:08.954 "bdev_nvme": { 00:06:08.954 "mask": "0x4000", 00:06:08.954 "tpoint_mask": "0x0" 00:06:08.954 }, 00:06:08.954 "sock": { 00:06:08.954 "mask": "0x8000", 00:06:08.954 "tpoint_mask": "0x0" 00:06:08.954 } 00:06:08.954 }' 00:06:08.954 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:08.954 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:08.954 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:08.954 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:08.954 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:09.216 00:06:09.216 real 0m0.204s 00:06:09.216 user 0m0.173s 00:06:09.216 sys 0m0.022s 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.216 07:11:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 ************************************ 00:06:09.216 END TEST rpc_trace_cmd_test 00:06:09.216 ************************************ 00:06:09.216 07:11:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:09.216 07:11:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:09.216 07:11:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:09.216 07:11:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.216 07:11:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.216 07:11:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 ************************************ 00:06:09.216 START TEST rpc_daemon_integrity 00:06:09.216 ************************************ 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.216 07:11:41 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.216 { 00:06:09.216 "name": "Malloc2", 00:06:09.216 "aliases": [ 00:06:09.216 "5e1f1993-46f3-4470-9da1-d7e72feabb1f" 00:06:09.216 ], 00:06:09.216 "product_name": "Malloc disk", 00:06:09.216 "block_size": 512, 00:06:09.216 "num_blocks": 16384, 00:06:09.216 "uuid": "5e1f1993-46f3-4470-9da1-d7e72feabb1f", 00:06:09.216 "assigned_rate_limits": { 00:06:09.216 "rw_ios_per_sec": 0, 00:06:09.216 "rw_mbytes_per_sec": 0, 00:06:09.216 "r_mbytes_per_sec": 0, 00:06:09.216 "w_mbytes_per_sec": 0 00:06:09.216 }, 00:06:09.216 "claimed": false, 00:06:09.216 "zoned": false, 00:06:09.216 "supported_io_types": { 00:06:09.216 "read": true, 00:06:09.216 "write": true, 00:06:09.216 "unmap": true, 00:06:09.216 "flush": true, 00:06:09.216 "reset": true, 00:06:09.216 "nvme_admin": false, 00:06:09.216 "nvme_io": false, 00:06:09.216 "nvme_io_md": false, 00:06:09.216 "write_zeroes": true, 00:06:09.216 "zcopy": true, 00:06:09.216 "get_zone_info": false, 00:06:09.216 "zone_management": false, 00:06:09.216 
"zone_append": false, 00:06:09.216 "compare": false, 00:06:09.216 "compare_and_write": false, 00:06:09.216 "abort": true, 00:06:09.216 "seek_hole": false, 00:06:09.216 "seek_data": false, 00:06:09.216 "copy": true, 00:06:09.216 "nvme_iov_md": false 00:06:09.216 }, 00:06:09.216 "memory_domains": [ 00:06:09.216 { 00:06:09.216 "dma_device_id": "system", 00:06:09.216 "dma_device_type": 1 00:06:09.216 }, 00:06:09.216 { 00:06:09.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.216 "dma_device_type": 2 00:06:09.216 } 00:06:09.216 ], 00:06:09.216 "driver_specific": {} 00:06:09.216 } 00:06:09.216 ]' 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 [2024-07-25 07:11:41.684945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:09.216 [2024-07-25 07:11:41.684988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.216 [2024-07-25 07:11:41.685022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd959a0 00:06:09.216 [2024-07-25 07:11:41.685040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.216 [2024-07-25 07:11:41.686415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.216 [2024-07-25 07:11:41.686440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.216 Passthru0 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.216 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.216 { 00:06:09.216 "name": "Malloc2", 00:06:09.216 "aliases": [ 00:06:09.216 "5e1f1993-46f3-4470-9da1-d7e72feabb1f" 00:06:09.216 ], 00:06:09.216 "product_name": "Malloc disk", 00:06:09.216 "block_size": 512, 00:06:09.216 "num_blocks": 16384, 00:06:09.216 "uuid": "5e1f1993-46f3-4470-9da1-d7e72feabb1f", 00:06:09.216 "assigned_rate_limits": { 00:06:09.216 "rw_ios_per_sec": 0, 00:06:09.216 "rw_mbytes_per_sec": 0, 00:06:09.216 "r_mbytes_per_sec": 0, 00:06:09.216 "w_mbytes_per_sec": 0 00:06:09.216 }, 00:06:09.216 "claimed": true, 00:06:09.216 "claim_type": "exclusive_write", 00:06:09.216 "zoned": false, 00:06:09.216 "supported_io_types": { 00:06:09.216 "read": true, 00:06:09.216 "write": true, 00:06:09.216 "unmap": true, 00:06:09.216 "flush": true, 00:06:09.216 "reset": true, 00:06:09.216 "nvme_admin": false, 00:06:09.216 "nvme_io": false, 00:06:09.216 "nvme_io_md": false, 00:06:09.216 "write_zeroes": true, 00:06:09.216 "zcopy": true, 00:06:09.216 "get_zone_info": false, 00:06:09.216 "zone_management": false, 00:06:09.216 "zone_append": false, 00:06:09.216 "compare": false, 00:06:09.216 "compare_and_write": false, 00:06:09.216 "abort": true, 00:06:09.216 "seek_hole": false, 00:06:09.216 "seek_data": false, 00:06:09.216 "copy": true, 00:06:09.216 "nvme_iov_md": false 00:06:09.216 }, 00:06:09.216 "memory_domains": [ 00:06:09.216 { 00:06:09.216 "dma_device_id": "system", 00:06:09.216 "dma_device_type": 1 00:06:09.216 }, 00:06:09.216 { 00:06:09.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.216 "dma_device_type": 2 00:06:09.216 } 00:06:09.216 ], 00:06:09.216 
"driver_specific": {} 00:06:09.216 }, 00:06:09.216 { 00:06:09.216 "name": "Passthru0", 00:06:09.216 "aliases": [ 00:06:09.216 "78b5b727-e89b-5b34-a83d-66b39b2199be" 00:06:09.216 ], 00:06:09.216 "product_name": "passthru", 00:06:09.216 "block_size": 512, 00:06:09.216 "num_blocks": 16384, 00:06:09.216 "uuid": "78b5b727-e89b-5b34-a83d-66b39b2199be", 00:06:09.216 "assigned_rate_limits": { 00:06:09.216 "rw_ios_per_sec": 0, 00:06:09.216 "rw_mbytes_per_sec": 0, 00:06:09.217 "r_mbytes_per_sec": 0, 00:06:09.217 "w_mbytes_per_sec": 0 00:06:09.217 }, 00:06:09.217 "claimed": false, 00:06:09.217 "zoned": false, 00:06:09.217 "supported_io_types": { 00:06:09.217 "read": true, 00:06:09.217 "write": true, 00:06:09.217 "unmap": true, 00:06:09.217 "flush": true, 00:06:09.217 "reset": true, 00:06:09.217 "nvme_admin": false, 00:06:09.217 "nvme_io": false, 00:06:09.217 "nvme_io_md": false, 00:06:09.217 "write_zeroes": true, 00:06:09.217 "zcopy": true, 00:06:09.217 "get_zone_info": false, 00:06:09.217 "zone_management": false, 00:06:09.217 "zone_append": false, 00:06:09.217 "compare": false, 00:06:09.217 "compare_and_write": false, 00:06:09.217 "abort": true, 00:06:09.217 "seek_hole": false, 00:06:09.217 "seek_data": false, 00:06:09.217 "copy": true, 00:06:09.217 "nvme_iov_md": false 00:06:09.217 }, 00:06:09.217 "memory_domains": [ 00:06:09.217 { 00:06:09.217 "dma_device_id": "system", 00:06:09.217 "dma_device_type": 1 00:06:09.217 }, 00:06:09.217 { 00:06:09.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.217 "dma_device_type": 2 00:06:09.217 } 00:06:09.217 ], 00:06:09.217 "driver_specific": { 00:06:09.217 "passthru": { 00:06:09.217 "name": "Passthru0", 00:06:09.217 "base_bdev_name": "Malloc2" 00:06:09.217 } 00:06:09.217 } 00:06:09.217 } 00:06:09.217 ]' 00:06:09.217 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.217 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.217 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:06:09.217 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.217 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.475 00:06:09.475 real 0m0.218s 00:06:09.475 user 0m0.150s 00:06:09.475 sys 0m0.018s 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.475 07:11:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.475 ************************************ 00:06:09.475 END TEST rpc_daemon_integrity 00:06:09.475 ************************************ 00:06:09.475 07:11:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:09.475 07:11:41 rpc -- rpc/rpc.sh@84 -- # killprocess 2349758 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@950 -- # '[' -z 2349758 ']' 
00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@954 -- # kill -0 2349758 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@955 -- # uname 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2349758 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2349758' 00:06:09.475 killing process with pid 2349758 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@969 -- # kill 2349758 00:06:09.475 07:11:41 rpc -- common/autotest_common.sh@974 -- # wait 2349758 00:06:10.040 00:06:10.040 real 0m1.947s 00:06:10.040 user 0m2.436s 00:06:10.040 sys 0m0.575s 00:06:10.040 07:11:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.040 07:11:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.040 ************************************ 00:06:10.040 END TEST rpc 00:06:10.040 ************************************ 00:06:10.040 07:11:42 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.040 07:11:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.040 07:11:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.040 07:11:42 -- common/autotest_common.sh@10 -- # set +x 00:06:10.040 ************************************ 00:06:10.040 START TEST skip_rpc 00:06:10.040 ************************************ 00:06:10.040 07:11:42 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.040 * Looking for test storage... 
00:06:10.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.040 07:11:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.040 07:11:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.040 07:11:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.040 07:11:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.040 07:11:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.040 07:11:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.040 ************************************ 00:06:10.040 START TEST skip_rpc 00:06:10.040 ************************************ 00:06:10.040 07:11:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:10.040 07:11:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2350198 00:06:10.040 07:11:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.040 07:11:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.040 07:11:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:10.040 [2024-07-25 07:11:42.474444] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:06:10.040 [2024-07-25 07:11:42.474521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350198 ] 00:06:10.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.040 [2024-07-25 07:11:42.540086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.298 [2024-07-25 07:11:42.656898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:06:15.557 07:11:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2350198 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2350198 ']' 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2350198 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2350198 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2350198' 00:06:15.558 killing process with pid 2350198 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2350198 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2350198 00:06:15.558 00:06:15.558 real 0m5.496s 00:06:15.558 user 0m5.180s 00:06:15.558 sys 0m0.315s 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.558 07:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.558 ************************************ 00:06:15.558 END TEST skip_rpc 00:06:15.558 ************************************ 00:06:15.558 07:11:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:15.558 07:11:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.558 07:11:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.558 07:11:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.558 
************************************ 00:06:15.558 START TEST skip_rpc_with_json 00:06:15.558 ************************************ 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2350883 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2350883 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2350883 ']' 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.558 07:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.558 [2024-07-25 07:11:48.012138] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:06:15.558 [2024-07-25 07:11:48.012236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350883 ] 00:06:15.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.558 [2024-07-25 07:11:48.069977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.816 [2024-07-25 07:11:48.182260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.074 [2024-07-25 07:11:48.446618] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:16.074 request: 00:06:16.074 { 00:06:16.074 "trtype": "tcp", 00:06:16.074 "method": "nvmf_get_transports", 00:06:16.074 "req_id": 1 00:06:16.074 } 00:06:16.074 Got JSON-RPC error response 00:06:16.074 response: 00:06:16.074 { 00:06:16.074 "code": -19, 00:06:16.074 "message": "No such device" 00:06:16.074 } 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.074 [2024-07-25 07:11:48.454728] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.074 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.332 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.332 07:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.332 { 00:06:16.332 "subsystems": [ 00:06:16.332 { 00:06:16.332 "subsystem": "vfio_user_target", 00:06:16.332 "config": null 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "subsystem": "keyring", 00:06:16.332 "config": [] 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "subsystem": "iobuf", 00:06:16.332 "config": [ 00:06:16.332 { 00:06:16.332 "method": "iobuf_set_options", 00:06:16.332 "params": { 00:06:16.332 "small_pool_count": 8192, 00:06:16.332 "large_pool_count": 1024, 00:06:16.332 "small_bufsize": 8192, 00:06:16.332 "large_bufsize": 135168 00:06:16.332 } 00:06:16.332 } 00:06:16.332 ] 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "subsystem": "sock", 00:06:16.332 "config": [ 00:06:16.332 { 00:06:16.332 "method": "sock_set_default_impl", 00:06:16.332 "params": { 00:06:16.332 "impl_name": "posix" 00:06:16.332 } 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "method": "sock_impl_set_options", 00:06:16.332 "params": { 00:06:16.332 "impl_name": "ssl", 00:06:16.332 "recv_buf_size": 4096, 00:06:16.332 "send_buf_size": 4096, 00:06:16.332 "enable_recv_pipe": true, 00:06:16.332 "enable_quickack": false, 00:06:16.332 "enable_placement_id": 0, 00:06:16.332 "enable_zerocopy_send_server": true, 00:06:16.332 "enable_zerocopy_send_client": false, 00:06:16.332 "zerocopy_threshold": 0, 
00:06:16.332 "tls_version": 0, 00:06:16.332 "enable_ktls": false 00:06:16.332 } 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "method": "sock_impl_set_options", 00:06:16.332 "params": { 00:06:16.332 "impl_name": "posix", 00:06:16.332 "recv_buf_size": 2097152, 00:06:16.332 "send_buf_size": 2097152, 00:06:16.332 "enable_recv_pipe": true, 00:06:16.332 "enable_quickack": false, 00:06:16.332 "enable_placement_id": 0, 00:06:16.332 "enable_zerocopy_send_server": true, 00:06:16.332 "enable_zerocopy_send_client": false, 00:06:16.332 "zerocopy_threshold": 0, 00:06:16.332 "tls_version": 0, 00:06:16.332 "enable_ktls": false 00:06:16.332 } 00:06:16.332 } 00:06:16.332 ] 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "subsystem": "vmd", 00:06:16.332 "config": [] 00:06:16.332 }, 00:06:16.332 { 00:06:16.332 "subsystem": "accel", 00:06:16.333 "config": [ 00:06:16.333 { 00:06:16.333 "method": "accel_set_options", 00:06:16.333 "params": { 00:06:16.333 "small_cache_size": 128, 00:06:16.333 "large_cache_size": 16, 00:06:16.333 "task_count": 2048, 00:06:16.333 "sequence_count": 2048, 00:06:16.333 "buf_count": 2048 00:06:16.333 } 00:06:16.333 } 00:06:16.333 ] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "bdev", 00:06:16.333 "config": [ 00:06:16.333 { 00:06:16.333 "method": "bdev_set_options", 00:06:16.333 "params": { 00:06:16.333 "bdev_io_pool_size": 65535, 00:06:16.333 "bdev_io_cache_size": 256, 00:06:16.333 "bdev_auto_examine": true, 00:06:16.333 "iobuf_small_cache_size": 128, 00:06:16.333 "iobuf_large_cache_size": 16 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "bdev_raid_set_options", 00:06:16.333 "params": { 00:06:16.333 "process_window_size_kb": 1024, 00:06:16.333 "process_max_bandwidth_mb_sec": 0 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "bdev_iscsi_set_options", 00:06:16.333 "params": { 00:06:16.333 "timeout_sec": 30 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "bdev_nvme_set_options", 00:06:16.333 
"params": { 00:06:16.333 "action_on_timeout": "none", 00:06:16.333 "timeout_us": 0, 00:06:16.333 "timeout_admin_us": 0, 00:06:16.333 "keep_alive_timeout_ms": 10000, 00:06:16.333 "arbitration_burst": 0, 00:06:16.333 "low_priority_weight": 0, 00:06:16.333 "medium_priority_weight": 0, 00:06:16.333 "high_priority_weight": 0, 00:06:16.333 "nvme_adminq_poll_period_us": 10000, 00:06:16.333 "nvme_ioq_poll_period_us": 0, 00:06:16.333 "io_queue_requests": 0, 00:06:16.333 "delay_cmd_submit": true, 00:06:16.333 "transport_retry_count": 4, 00:06:16.333 "bdev_retry_count": 3, 00:06:16.333 "transport_ack_timeout": 0, 00:06:16.333 "ctrlr_loss_timeout_sec": 0, 00:06:16.333 "reconnect_delay_sec": 0, 00:06:16.333 "fast_io_fail_timeout_sec": 0, 00:06:16.333 "disable_auto_failback": false, 00:06:16.333 "generate_uuids": false, 00:06:16.333 "transport_tos": 0, 00:06:16.333 "nvme_error_stat": false, 00:06:16.333 "rdma_srq_size": 0, 00:06:16.333 "io_path_stat": false, 00:06:16.333 "allow_accel_sequence": false, 00:06:16.333 "rdma_max_cq_size": 0, 00:06:16.333 "rdma_cm_event_timeout_ms": 0, 00:06:16.333 "dhchap_digests": [ 00:06:16.333 "sha256", 00:06:16.333 "sha384", 00:06:16.333 "sha512" 00:06:16.333 ], 00:06:16.333 "dhchap_dhgroups": [ 00:06:16.333 "null", 00:06:16.333 "ffdhe2048", 00:06:16.333 "ffdhe3072", 00:06:16.333 "ffdhe4096", 00:06:16.333 "ffdhe6144", 00:06:16.333 "ffdhe8192" 00:06:16.333 ] 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "bdev_nvme_set_hotplug", 00:06:16.333 "params": { 00:06:16.333 "period_us": 100000, 00:06:16.333 "enable": false 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "bdev_wait_for_examine" 00:06:16.333 } 00:06:16.333 ] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "scsi", 00:06:16.333 "config": null 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "scheduler", 00:06:16.333 "config": [ 00:06:16.333 { 00:06:16.333 "method": "framework_set_scheduler", 00:06:16.333 "params": { 00:06:16.333 
"name": "static" 00:06:16.333 } 00:06:16.333 } 00:06:16.333 ] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "vhost_scsi", 00:06:16.333 "config": [] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "vhost_blk", 00:06:16.333 "config": [] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "ublk", 00:06:16.333 "config": [] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "nbd", 00:06:16.333 "config": [] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "nvmf", 00:06:16.333 "config": [ 00:06:16.333 { 00:06:16.333 "method": "nvmf_set_config", 00:06:16.333 "params": { 00:06:16.333 "discovery_filter": "match_any", 00:06:16.333 "admin_cmd_passthru": { 00:06:16.333 "identify_ctrlr": false 00:06:16.333 } 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "nvmf_set_max_subsystems", 00:06:16.333 "params": { 00:06:16.333 "max_subsystems": 1024 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "nvmf_set_crdt", 00:06:16.333 "params": { 00:06:16.333 "crdt1": 0, 00:06:16.333 "crdt2": 0, 00:06:16.333 "crdt3": 0 00:06:16.333 } 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "method": "nvmf_create_transport", 00:06:16.333 "params": { 00:06:16.333 "trtype": "TCP", 00:06:16.333 "max_queue_depth": 128, 00:06:16.333 "max_io_qpairs_per_ctrlr": 127, 00:06:16.333 "in_capsule_data_size": 4096, 00:06:16.333 "max_io_size": 131072, 00:06:16.333 "io_unit_size": 131072, 00:06:16.333 "max_aq_depth": 128, 00:06:16.333 "num_shared_buffers": 511, 00:06:16.333 "buf_cache_size": 4294967295, 00:06:16.333 "dif_insert_or_strip": false, 00:06:16.333 "zcopy": false, 00:06:16.333 "c2h_success": true, 00:06:16.333 "sock_priority": 0, 00:06:16.333 "abort_timeout_sec": 1, 00:06:16.333 "ack_timeout": 0, 00:06:16.333 "data_wr_pool_size": 0 00:06:16.333 } 00:06:16.333 } 00:06:16.333 ] 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "subsystem": "iscsi", 00:06:16.333 "config": [ 00:06:16.333 { 00:06:16.333 "method": "iscsi_set_options", 00:06:16.333 
"params": { 00:06:16.333 "node_base": "iqn.2016-06.io.spdk", 00:06:16.333 "max_sessions": 128, 00:06:16.333 "max_connections_per_session": 2, 00:06:16.333 "max_queue_depth": 64, 00:06:16.333 "default_time2wait": 2, 00:06:16.333 "default_time2retain": 20, 00:06:16.333 "first_burst_length": 8192, 00:06:16.333 "immediate_data": true, 00:06:16.333 "allow_duplicated_isid": false, 00:06:16.333 "error_recovery_level": 0, 00:06:16.333 "nop_timeout": 60, 00:06:16.333 "nop_in_interval": 30, 00:06:16.333 "disable_chap": false, 00:06:16.333 "require_chap": false, 00:06:16.333 "mutual_chap": false, 00:06:16.333 "chap_group": 0, 00:06:16.333 "max_large_datain_per_connection": 64, 00:06:16.333 "max_r2t_per_connection": 4, 00:06:16.333 "pdu_pool_size": 36864, 00:06:16.333 "immediate_data_pool_size": 16384, 00:06:16.333 "data_out_pool_size": 2048 00:06:16.333 } 00:06:16.333 } 00:06:16.333 ] 00:06:16.333 } 00:06:16.333 ] 00:06:16.333 } 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2350883 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2350883 ']' 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2350883 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2350883 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2350883' 00:06:16.333 killing process with pid 2350883 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2350883 00:06:16.333 07:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2350883 00:06:16.591 07:11:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2351025 00:06:16.592 07:11:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.592 07:11:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2351025 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2351025 ']' 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2351025 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351025 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351025' 00:06:21.851 killing process with pid 2351025 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2351025 00:06:21.851 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2351025 00:06:22.109 07:11:54 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.109 07:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.109 00:06:22.109 real 0m6.642s 00:06:22.109 user 0m6.248s 00:06:22.109 sys 0m0.686s 00:06:22.109 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.109 07:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.109 ************************************ 00:06:22.109 END TEST skip_rpc_with_json 00:06:22.109 ************************************ 00:06:22.109 07:11:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:22.109 07:11:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.109 07:11:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.109 07:11:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 ************************************ 00:06:22.367 START TEST skip_rpc_with_delay 00:06:22.367 ************************************ 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.367 
07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.367 [2024-07-25 07:11:54.713155] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
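The `valid_exec_arg`/`NOT` machinery traced above runs `spdk_tgt --no-rpc-server --wait-for-rpc` expecting it to fail, then inverts that failure into a test pass. A minimal sketch of the inversion pattern (`NOT` here is a hypothetical stand-in for the harness helper in `autotest_common.sh`, not its full implementation):

```shell
# Hypothetical stand-in for the harness's NOT helper: succeed only when the
# wrapped command fails, mirroring how the expected --wait-for-rpc error
# above becomes a passing assertion.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded
  else
    return 0    # command failed, which is what the test wanted
  fi
}

NOT false && echo "expected failure observed"
```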
00:06:22.367 [2024-07-25 07:11:54.713261] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.367 00:06:22.367 real 0m0.070s 00:06:22.367 user 0m0.042s 00:06:22.367 sys 0m0.028s 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.367 07:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 ************************************ 00:06:22.367 END TEST skip_rpc_with_delay 00:06:22.367 ************************************ 00:06:22.367 07:11:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:22.367 07:11:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:22.367 07:11:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:22.367 07:11:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.367 07:11:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.367 07:11:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 ************************************ 00:06:22.367 START TEST exit_on_failed_rpc_init 00:06:22.367 ************************************ 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2351743 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2351743 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2351743 ']' 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.367 07:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 [2024-07-25 07:11:54.825503] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
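The `waitforlisten` call above blocks until the target starts listening on `/var/tmp/spdk.sock`. A rough sketch of that polling idea (`waitforsocket` is a hypothetical simplification, assuming only that the readiness signal is a UNIX domain socket path appearing):

```shell
# Hypothetical simplified poll: retry until the UNIX domain socket path
# exists, giving up after max_retries attempts. The real waitforlisten in
# autotest_common.sh also checks the process is still alive.
waitforsocket() {
  local sock=$1 max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    [ -S "$sock" ] && return 0
    sleep 0.1
  done
  return 1
}

# With no server running, the wait times out and reports failure.
waitforsocket /tmp/no-such-spdk.sock 3 || echo "timed out"
```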
00:06:22.367 [2024-07-25 07:11:54.825596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351743 ] 00:06:22.367 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.367 [2024-07-25 07:11:54.887367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.625 [2024-07-25 07:11:55.001049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.558 07:11:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:23.558 07:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.558 [2024-07-25 07:11:55.816495] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:23.558 [2024-07-25 07:11:55.816588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351778 ] 00:06:23.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.558 [2024-07-25 07:11:55.880158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.558 [2024-07-25 07:11:55.999649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.558 [2024-07-25 07:11:55.999774] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:23.558 [2024-07-25 07:11:55.999796] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:23.558 [2024-07-25 07:11:55.999810] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2351743 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2351743 ']' 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2351743 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351743 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351743' 
00:06:23.816 killing process with pid 2351743 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2351743 00:06:23.816 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2351743 00:06:24.383 00:06:24.383 real 0m1.853s 00:06:24.383 user 0m2.221s 00:06:24.383 sys 0m0.482s 00:06:24.383 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.383 07:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.383 ************************************ 00:06:24.383 END TEST exit_on_failed_rpc_init 00:06:24.383 ************************************ 00:06:24.383 07:11:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:24.383 00:06:24.383 real 0m14.304s 00:06:24.383 user 0m13.785s 00:06:24.383 sys 0m1.674s 00:06:24.383 07:11:56 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.383 07:11:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.383 ************************************ 00:06:24.383 END TEST skip_rpc 00:06:24.383 ************************************ 00:06:24.383 07:11:56 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.383 07:11:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.383 07:11:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.383 07:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:24.383 ************************************ 00:06:24.383 START TEST rpc_client 00:06:24.383 ************************************ 00:06:24.383 07:11:56 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.383 * Looking for test storage... 
00:06:24.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:24.383 07:11:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:24.383 OK 00:06:24.383 07:11:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:24.383 00:06:24.383 real 0m0.065s 00:06:24.383 user 0m0.024s 00:06:24.383 sys 0m0.047s 00:06:24.383 07:11:56 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.383 07:11:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:24.383 ************************************ 00:06:24.383 END TEST rpc_client 00:06:24.383 ************************************ 00:06:24.383 07:11:56 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.383 07:11:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.383 07:11:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.383 07:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:24.383 ************************************ 00:06:24.383 START TEST json_config 00:06:24.383 ************************************ 00:06:24.383 07:11:56 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.383 07:11:56 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.383 07:11:56 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.383 07:11:56 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.383 07:11:56 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.383 07:11:56 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.383 07:11:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:24.383 07:11:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.383 07:11:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.383 07:11:56 json_config -- paths/export.sh@5 -- # export PATH 00:06:24.383 07:11:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@47 -- # : 0 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.383 07:11:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.384 07:11:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.384 07:11:56 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:24.384 07:11:56 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:24.384 07:11:56 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:24.384 07:11:56 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:24.384 INFO: JSON configuration test init 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.384 07:11:56 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:24.384 07:11:56 json_config -- json_config/common.sh@9 -- # local app=target 00:06:24.384 07:11:56 json_config -- json_config/common.sh@10 -- # shift 00:06:24.384 07:11:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.384 07:11:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.384 07:11:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.384 07:11:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.384 07:11:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.384 07:11:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2352018 00:06:24.384 07:11:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:24.384 07:11:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:24.384 Waiting for target to run... 00:06:24.384 07:11:56 json_config -- json_config/common.sh@25 -- # waitforlisten 2352018 /var/tmp/spdk_tgt.sock 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@831 -- # '[' -z 2352018 ']' 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.384 07:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.642 [2024-07-25 07:11:56.921101] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
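The `app_pid`/`app_socket`/`app_params` declarations traced earlier keep per-app launch state in bash associative arrays keyed by role. A self-contained sketch of that bookkeeping, using the same keys and values the trace shows (the `echo` stands in for the actual `spdk_tgt` launch):

```shell
# Per-app bookkeeping as json_config/common.sh declares it: one associative
# array per attribute, indexed by the app role ("target" or "initiator").
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock'
                       [initiator]='/var/tmp/spdk_initiator.sock')
declare -A app_params=([target]='-m 0x1 -s 1024'
                       [initiator]='-m 0x2 -g -u -s 1024')

app=target
# Stand-in for the real launch line; shows how the arrays compose a command.
echo "would launch with: ${app_params[$app]} -r ${app_socket[$app]}"
```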
00:06:24.642 [2024-07-25 07:11:56.921190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352018 ] 00:06:24.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.899 [2024-07-25 07:11:57.408795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.157 [2024-07-25 07:11:57.516373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.416 07:11:57 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.416 07:11:57 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:25.416 07:11:57 json_config -- json_config/common.sh@26 -- # echo '' 00:06:25.416 00:06:25.416 07:11:57 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:25.416 07:11:57 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:25.416 07:11:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.416 07:11:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.416 07:11:57 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:25.416 07:11:57 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:25.416 07:11:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.416 07:11:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.416 07:11:57 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:25.416 07:11:57 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:25.416 07:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:28.717 
07:12:01 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:28.717 07:12:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:28.717 07:12:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.717 07:12:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.717 07:12:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:28.717 07:12:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:28.717 07:12:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:28.717 07:12:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:28.717 07:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:28.717 07:12:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@51 -- # sort 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:28.974 07:12:01 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.974 07:12:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:28.974 07:12:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.974 07:12:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:28.974 07:12:01 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:28.974 07:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.232 MallocForNvmf0 00:06:29.232 07:12:01 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.232 07:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.489 MallocForNvmf1 00:06:29.489 07:12:01 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.489 07:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.747 [2024-07-25 07:12:02.094725] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.747 07:12:02 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.747 07:12:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.004 07:12:02 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.004 07:12:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.261 07:12:02 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.261 07:12:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.519 07:12:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.519 07:12:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.776 [2024-07-25 07:12:03.118113] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.776 07:12:03 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:30.776 07:12:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.776 07:12:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.776 07:12:03 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:30.776 07:12:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.776 07:12:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.776 07:12:03 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:30.776 07:12:03 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:30.776 07:12:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.032 MallocBdevForConfigChangeCheck 00:06:31.032 07:12:03 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:31.032 07:12:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.032 07:12:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.032 07:12:03 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:31.032 07:12:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.595 07:12:03 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:31.595 INFO: shutting down applications... 
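[editor note] The `tgt_check_notification_types` trace earlier in this section (json_config.sh@46-53) reduces to a plain shell set comparison: merge both type lists, then `sort | uniq -u` keeps only entries that appear once, so an empty result means the lists match. A minimal standalone sketch, with the `notify_get_types` RPC result hard-coded instead of fetched from `/var/tmp/spdk_tgt.sock`:

```shell
# Types the test expects vs. types the target reports. In the real harness
# get_types comes from "rpc.py notify_get_types | jq -r '.[]'"; it is
# hard-coded here so the sketch runs without a live spdk_tgt.
enabled_types="bdev_register bdev_unregister"
get_types="bdev_register bdev_unregister"

# Each matching type appears exactly twice in the merged stream, so
# "uniq -u" (print only unique lines) emits nothing when the sets agree.
type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)

if [ -z "$type_diff" ]; then
    echo "notification types match"
fi
```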
00:06:31.595 07:12:03 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:31.595 07:12:03 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:31.595 07:12:03 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:31.595 07:12:03 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.965 Calling clear_iscsi_subsystem 00:06:32.965 Calling clear_nvmf_subsystem 00:06:32.965 Calling clear_nbd_subsystem 00:06:32.965 Calling clear_ublk_subsystem 00:06:32.965 Calling clear_vhost_blk_subsystem 00:06:32.965 Calling clear_vhost_scsi_subsystem 00:06:32.965 Calling clear_bdev_subsystem 00:06:33.223 07:12:05 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:33.223 07:12:05 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:33.223 07:12:05 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:33.223 07:12:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.223 07:12:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:33.223 07:12:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:33.480 07:12:05 json_config -- json_config/json_config.sh@349 -- # break 00:06:33.480 07:12:05 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:33.480 07:12:05 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:33.480 07:12:05 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:33.480 07:12:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.480 07:12:05 json_config -- json_config/common.sh@35 -- # [[ -n 2352018 ]] 00:06:33.480 07:12:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2352018 00:06:33.480 07:12:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.480 07:12:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.480 07:12:05 json_config -- json_config/common.sh@41 -- # kill -0 2352018 00:06:33.480 07:12:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.047 07:12:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.047 07:12:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.047 07:12:06 json_config -- json_config/common.sh@41 -- # kill -0 2352018 00:06:34.047 07:12:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.047 07:12:06 json_config -- json_config/common.sh@43 -- # break 00:06:34.047 07:12:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.047 07:12:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.047 SPDK target shutdown done 00:06:34.047 07:12:06 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:34.047 INFO: relaunching applications... 
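[editor note] The shutdown sequence traced above (common.sh@38-45) signals the target and then polls `kill -0` for up to 30 half-second intervals. A runnable sketch, with a backgrounded `sleep` standing in for the `spdk_tgt` process; note the harness sends SIGINT, but this sketch uses plain `kill` (SIGTERM) because background jobs in a non-interactive shell ignore SIGINT:

```shell
# Stand-in for the spdk_tgt process being shut down.
sleep 30 &
app_pid=$!

# Signal it (the real harness uses "kill -SIGINT $app_pid").
kill "$app_pid" 2>/dev/null

# Poll with "kill -0" (signal 0 = existence check) until the pid is gone,
# bounded at 30 iterations of 0.5 s, mirroring common.sh@40-45.
shutdown_done=0
i=0
while [ "$i" -lt 30 ]; do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        shutdown_done=1
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
    i=$((i + 1))
done
```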
00:06:34.047 07:12:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.047 07:12:06 json_config -- json_config/common.sh@9 -- # local app=target 00:06:34.047 07:12:06 json_config -- json_config/common.sh@10 -- # shift 00:06:34.047 07:12:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:34.047 07:12:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:34.047 07:12:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:34.047 07:12:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.047 07:12:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.047 07:12:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2353442 00:06:34.047 07:12:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.047 07:12:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:34.047 Waiting for target to run... 00:06:34.047 07:12:06 json_config -- json_config/common.sh@25 -- # waitforlisten 2353442 /var/tmp/spdk_tgt.sock 00:06:34.047 07:12:06 json_config -- common/autotest_common.sh@831 -- # '[' -z 2353442 ']' 00:06:34.047 07:12:06 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:34.047 07:12:06 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.047 07:12:06 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:34.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:34.047 07:12:06 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.047 07:12:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.047 [2024-07-25 07:12:06.441301] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:34.047 [2024-07-25 07:12:06.441411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353442 ] 00:06:34.047 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.305 [2024-07-25 07:12:06.804943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.563 [2024-07-25 07:12:06.894897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.842 [2024-07-25 07:12:09.940263] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.842 [2024-07-25 07:12:09.972754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:37.842 07:12:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.842 07:12:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:37.842 07:12:10 json_config -- json_config/common.sh@26 -- # echo '' 00:06:37.843 00:06:37.843 07:12:10 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:37.843 07:12:10 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:37.843 INFO: Checking if target configuration is the same... 
00:06:37.843 07:12:10 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.843 07:12:10 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:37.843 07:12:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.843 + '[' 2 -ne 2 ']' 00:06:37.843 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.843 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:37.843 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.843 +++ basename /dev/fd/62 00:06:37.843 ++ mktemp /tmp/62.XXX 00:06:37.843 + tmp_file_1=/tmp/62.7Ti 00:06:37.843 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.843 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.843 + tmp_file_2=/tmp/spdk_tgt_config.json.0G5 00:06:37.843 + ret=0 00:06:37.843 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.100 + diff -u /tmp/62.7Ti /tmp/spdk_tgt_config.json.0G5 00:06:38.100 + echo 'INFO: JSON config files are the same' 00:06:38.101 INFO: JSON config files are the same 00:06:38.101 + rm /tmp/62.7Ti /tmp/spdk_tgt_config.json.0G5 00:06:38.101 + exit 0 00:06:38.101 07:12:10 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:38.101 07:12:10 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:38.101 INFO: changing configuration and checking if this can be detected... 
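[editor note] The `json_diff.sh` flow traced above snapshots two configs into `mktemp` files, normalizes each with `config_filter.py -method sort`, and diffs them: exit 0 when identical, exit 1 when they differ. A minimal sketch using identical stand-in JSON and plain `diff` (the normalization step is omitted, so this assumes both files already have the same key order):

```shell
# Two temp files standing in for /dev/fd/62 and spdk_tgt_config.json.
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
echo '{"subsystems": []}' > "$tmp_file_1"
echo '{"subsystems": []}' > "$tmp_file_2"

# diff exits 0 for identical input, 1 for differences -- the same status
# the harness turns into "JSON config files are the same" vs. ret=1.
if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
    ret=0
    echo 'INFO: JSON config files are the same'
else
    ret=1
fi

rm "$tmp_file_1" "$tmp_file_2"
```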
00:06:38.101 07:12:10 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.101 07:12:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.358 07:12:10 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.358 07:12:10 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:38.358 07:12:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.358 + '[' 2 -ne 2 ']' 00:06:38.358 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:38.358 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:38.358 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:38.358 +++ basename /dev/fd/62 00:06:38.358 ++ mktemp /tmp/62.XXX 00:06:38.358 + tmp_file_1=/tmp/62.kiM 00:06:38.358 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.358 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:38.358 + tmp_file_2=/tmp/spdk_tgt_config.json.yNW 00:06:38.358 + ret=0 00:06:38.358 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.615 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.615 + diff -u /tmp/62.kiM /tmp/spdk_tgt_config.json.yNW 00:06:38.615 + ret=1 00:06:38.615 + echo '=== Start of file: /tmp/62.kiM ===' 00:06:38.615 + cat /tmp/62.kiM 00:06:38.615 + echo '=== End of file: /tmp/62.kiM ===' 00:06:38.615 + echo '' 00:06:38.615 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yNW ===' 00:06:38.615 + cat /tmp/spdk_tgt_config.json.yNW 00:06:38.615 + echo '=== End of file: /tmp/spdk_tgt_config.json.yNW ===' 00:06:38.615 + echo '' 00:06:38.615 + rm /tmp/62.kiM /tmp/spdk_tgt_config.json.yNW 00:06:38.615 + exit 1 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:38.615 INFO: configuration change detected. 
00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:38.615 07:12:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.615 07:12:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@321 -- # [[ -n 2353442 ]] 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:38.615 07:12:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.615 07:12:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:38.615 07:12:11 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:38.615 07:12:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.615 07:12:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.873 07:12:11 json_config -- json_config/json_config.sh@327 -- # killprocess 2353442 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@950 -- # '[' -z 2353442 ']' 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@954 -- # kill -0 
2353442 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@955 -- # uname 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2353442 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2353442' 00:06:38.873 killing process with pid 2353442 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@969 -- # kill 2353442 00:06:38.873 07:12:11 json_config -- common/autotest_common.sh@974 -- # wait 2353442 00:06:40.774 07:12:12 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:40.774 07:12:12 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:40.774 07:12:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.774 07:12:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 07:12:12 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:40.774 07:12:12 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:40.774 INFO: Success 00:06:40.774 00:06:40.774 real 0m16.025s 00:06:40.774 user 0m17.936s 00:06:40.774 sys 0m2.047s 00:06:40.774 07:12:12 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.774 07:12:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 ************************************ 00:06:40.774 END TEST json_config 00:06:40.774 ************************************ 00:06:40.774 07:12:12 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:40.774 07:12:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.774 07:12:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.774 07:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 ************************************ 00:06:40.774 START TEST json_config_extra_key 00:06:40.774 ************************************ 00:06:40.774 07:12:12 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.774 07:12:12 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.774 07:12:12 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.774 07:12:12 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.774 07:12:12 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.774 07:12:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.774 07:12:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.774 07:12:12 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.774 07:12:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:40.774 07:12:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.774 07:12:12 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:40.774 07:12:12 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:40.774 INFO: launching applications... 
00:06:40.774 07:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2354854 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:40.774 07:12:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.774 Waiting for target to run... 
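[editor note] The `waitforlisten 2354854 /var/tmp/spdk_tgt.sock` call that follows blocks until the target's RPC socket appears. Its internals are not shown in this trace; the sketch below is an assumed, simplified implementation (the real common.sh helper also verifies the pid is alive and retries up to `max_retries=100`), demonstrated on a socket path that never appears so it times out:

```shell
# Hypothetical simplified waitforlisten: poll for a UNIX-domain socket at
# $1, retrying up to $2 times (default 100) with a 0.1 s pause.
waitforlisten() {
    sock=$1
    max_retries=${2:-100}
    n=0
    while [ "$n" -lt "$max_retries" ]; do
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
        n=$((n + 1))
    done
    return 1
}

# No target is running here, so this path never becomes a socket.
waitforlisten /tmp/no-such-spdk-target.sock 3 && status=up || status=timeout
echo "$status"
```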
00:06:40.775 07:12:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2354854 /var/tmp/spdk_tgt.sock 00:06:40.775 07:12:12 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2354854 ']' 00:06:40.775 07:12:12 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.775 07:12:12 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.775 07:12:12 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.775 07:12:12 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.775 07:12:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:40.775 [2024-07-25 07:12:12.982642] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:06:40.775 [2024-07-25 07:12:12.982746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354854 ] 00:06:40.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.032 [2024-07-25 07:12:13.322279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.032 [2024-07-25 07:12:13.411351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.598 07:12:13 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.598 07:12:13 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:41.598 00:06:41.598 07:12:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:41.598 INFO: shutting down applications... 
00:06:41.598 07:12:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2354854 ]] 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2354854 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2354854 00:06:41.598 07:12:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.165 07:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.165 07:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.165 07:12:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2354854 00:06:42.165 07:12:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.421 07:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.421 07:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.421 07:12:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2354854 00:06:42.421 07:12:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.421 07:12:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:42.421 07:12:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.422 07:12:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.422 SPDK target shutdown done 00:06:42.422 07:12:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # 
echo Success 00:06:42.422 Success 00:06:42.422 00:06:42.422 real 0m2.057s 00:06:42.422 user 0m1.590s 00:06:42.422 sys 0m0.431s 00:06:42.422 07:12:14 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.422 07:12:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:42.422 ************************************ 00:06:42.422 END TEST json_config_extra_key 00:06:42.422 ************************************ 00:06:42.679 07:12:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.679 07:12:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.679 07:12:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.679 07:12:14 -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 START TEST alias_rpc 00:06:42.679 ************************************ 00:06:42.679 07:12:14 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.679 * Looking for test storage... 
00:06:42.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:42.679 07:12:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.679 07:12:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2355165 00:06:42.679 07:12:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.679 07:12:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2355165 00:06:42.679 07:12:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2355165 ']' 00:06:42.679 07:12:15 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.679 07:12:15 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.679 07:12:15 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.679 07:12:15 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.679 07:12:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 [2024-07-25 07:12:15.081127] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:06:42.679 [2024-07-25 07:12:15.081203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355165 ] 00:06:42.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.679 [2024-07-25 07:12:15.138684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.936 [2024-07-25 07:12:15.250112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.194 07:12:15 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.194 07:12:15 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:43.194 07:12:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:43.451 07:12:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2355165 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2355165 ']' 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2355165 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2355165 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2355165' 00:06:43.451 killing process with pid 2355165 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@969 -- # kill 2355165 00:06:43.451 07:12:15 alias_rpc -- common/autotest_common.sh@974 -- # wait 2355165 00:06:44.016 00:06:44.016 real 0m1.285s 00:06:44.016 user 0m1.369s 
00:06:44.016 sys 0m0.413s 00:06:44.016 07:12:16 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.016 07:12:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.016 ************************************ 00:06:44.016 END TEST alias_rpc 00:06:44.016 ************************************ 00:06:44.016 07:12:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:44.016 07:12:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:44.016 07:12:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.016 07:12:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.016 07:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:44.016 ************************************ 00:06:44.016 START TEST spdkcli_tcp 00:06:44.016 ************************************ 00:06:44.016 07:12:16 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:44.016 * Looking for test storage... 
00:06:44.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:44.016 07:12:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.016 07:12:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2355356 00:06:44.016 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:44.017 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2355356 00:06:44.017 07:12:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2355356 ']' 00:06:44.017 07:12:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.017 07:12:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.017 07:12:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.017 07:12:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.017 07:12:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.017 [2024-07-25 07:12:16.424110] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:06:44.017 [2024-07-25 07:12:16.424193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355356 ] 00:06:44.017 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.017 [2024-07-25 07:12:16.482148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.275 [2024-07-25 07:12:16.590770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.275 [2024-07-25 07:12:16.590776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.532 07:12:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.532 07:12:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:44.532 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2355369 00:06:44.532 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:44.532 07:12:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:44.788 [ 00:06:44.788 "bdev_malloc_delete", 00:06:44.788 "bdev_malloc_create", 00:06:44.788 "bdev_null_resize", 00:06:44.788 "bdev_null_delete", 00:06:44.788 "bdev_null_create", 00:06:44.788 "bdev_nvme_cuse_unregister", 00:06:44.788 "bdev_nvme_cuse_register", 00:06:44.788 "bdev_opal_new_user", 00:06:44.788 "bdev_opal_set_lock_state", 00:06:44.788 "bdev_opal_delete", 00:06:44.788 "bdev_opal_get_info", 00:06:44.788 "bdev_opal_create", 00:06:44.788 "bdev_nvme_opal_revert", 00:06:44.788 
"bdev_nvme_opal_init", 00:06:44.788 "bdev_nvme_send_cmd", 00:06:44.788 "bdev_nvme_get_path_iostat", 00:06:44.788 "bdev_nvme_get_mdns_discovery_info", 00:06:44.788 "bdev_nvme_stop_mdns_discovery", 00:06:44.789 "bdev_nvme_start_mdns_discovery", 00:06:44.789 "bdev_nvme_set_multipath_policy", 00:06:44.789 "bdev_nvme_set_preferred_path", 00:06:44.789 "bdev_nvme_get_io_paths", 00:06:44.789 "bdev_nvme_remove_error_injection", 00:06:44.789 "bdev_nvme_add_error_injection", 00:06:44.789 "bdev_nvme_get_discovery_info", 00:06:44.789 "bdev_nvme_stop_discovery", 00:06:44.789 "bdev_nvme_start_discovery", 00:06:44.789 "bdev_nvme_get_controller_health_info", 00:06:44.789 "bdev_nvme_disable_controller", 00:06:44.789 "bdev_nvme_enable_controller", 00:06:44.789 "bdev_nvme_reset_controller", 00:06:44.789 "bdev_nvme_get_transport_statistics", 00:06:44.789 "bdev_nvme_apply_firmware", 00:06:44.789 "bdev_nvme_detach_controller", 00:06:44.789 "bdev_nvme_get_controllers", 00:06:44.789 "bdev_nvme_attach_controller", 00:06:44.789 "bdev_nvme_set_hotplug", 00:06:44.789 "bdev_nvme_set_options", 00:06:44.789 "bdev_passthru_delete", 00:06:44.789 "bdev_passthru_create", 00:06:44.789 "bdev_lvol_set_parent_bdev", 00:06:44.789 "bdev_lvol_set_parent", 00:06:44.789 "bdev_lvol_check_shallow_copy", 00:06:44.789 "bdev_lvol_start_shallow_copy", 00:06:44.789 "bdev_lvol_grow_lvstore", 00:06:44.789 "bdev_lvol_get_lvols", 00:06:44.789 "bdev_lvol_get_lvstores", 00:06:44.789 "bdev_lvol_delete", 00:06:44.789 "bdev_lvol_set_read_only", 00:06:44.789 "bdev_lvol_resize", 00:06:44.789 "bdev_lvol_decouple_parent", 00:06:44.789 "bdev_lvol_inflate", 00:06:44.789 "bdev_lvol_rename", 00:06:44.789 "bdev_lvol_clone_bdev", 00:06:44.789 "bdev_lvol_clone", 00:06:44.789 "bdev_lvol_snapshot", 00:06:44.789 "bdev_lvol_create", 00:06:44.789 "bdev_lvol_delete_lvstore", 00:06:44.789 "bdev_lvol_rename_lvstore", 00:06:44.789 "bdev_lvol_create_lvstore", 00:06:44.789 "bdev_raid_set_options", 00:06:44.789 "bdev_raid_remove_base_bdev", 
00:06:44.789 "bdev_raid_add_base_bdev", 00:06:44.789 "bdev_raid_delete", 00:06:44.789 "bdev_raid_create", 00:06:44.789 "bdev_raid_get_bdevs", 00:06:44.789 "bdev_error_inject_error", 00:06:44.789 "bdev_error_delete", 00:06:44.789 "bdev_error_create", 00:06:44.789 "bdev_split_delete", 00:06:44.789 "bdev_split_create", 00:06:44.789 "bdev_delay_delete", 00:06:44.789 "bdev_delay_create", 00:06:44.789 "bdev_delay_update_latency", 00:06:44.789 "bdev_zone_block_delete", 00:06:44.789 "bdev_zone_block_create", 00:06:44.789 "blobfs_create", 00:06:44.789 "blobfs_detect", 00:06:44.789 "blobfs_set_cache_size", 00:06:44.789 "bdev_aio_delete", 00:06:44.789 "bdev_aio_rescan", 00:06:44.789 "bdev_aio_create", 00:06:44.789 "bdev_ftl_set_property", 00:06:44.789 "bdev_ftl_get_properties", 00:06:44.789 "bdev_ftl_get_stats", 00:06:44.789 "bdev_ftl_unmap", 00:06:44.789 "bdev_ftl_unload", 00:06:44.789 "bdev_ftl_delete", 00:06:44.789 "bdev_ftl_load", 00:06:44.789 "bdev_ftl_create", 00:06:44.789 "bdev_virtio_attach_controller", 00:06:44.789 "bdev_virtio_scsi_get_devices", 00:06:44.789 "bdev_virtio_detach_controller", 00:06:44.789 "bdev_virtio_blk_set_hotplug", 00:06:44.789 "bdev_iscsi_delete", 00:06:44.789 "bdev_iscsi_create", 00:06:44.789 "bdev_iscsi_set_options", 00:06:44.789 "accel_error_inject_error", 00:06:44.789 "ioat_scan_accel_module", 00:06:44.789 "dsa_scan_accel_module", 00:06:44.789 "iaa_scan_accel_module", 00:06:44.789 "vfu_virtio_create_scsi_endpoint", 00:06:44.789 "vfu_virtio_scsi_remove_target", 00:06:44.789 "vfu_virtio_scsi_add_target", 00:06:44.789 "vfu_virtio_create_blk_endpoint", 00:06:44.789 "vfu_virtio_delete_endpoint", 00:06:44.789 "keyring_file_remove_key", 00:06:44.789 "keyring_file_add_key", 00:06:44.789 "keyring_linux_set_options", 00:06:44.789 "iscsi_get_histogram", 00:06:44.789 "iscsi_enable_histogram", 00:06:44.789 "iscsi_set_options", 00:06:44.789 "iscsi_get_auth_groups", 00:06:44.789 "iscsi_auth_group_remove_secret", 00:06:44.789 "iscsi_auth_group_add_secret", 
00:06:44.789 "iscsi_delete_auth_group", 00:06:44.789 "iscsi_create_auth_group", 00:06:44.789 "iscsi_set_discovery_auth", 00:06:44.789 "iscsi_get_options", 00:06:44.789 "iscsi_target_node_request_logout", 00:06:44.789 "iscsi_target_node_set_redirect", 00:06:44.789 "iscsi_target_node_set_auth", 00:06:44.789 "iscsi_target_node_add_lun", 00:06:44.789 "iscsi_get_stats", 00:06:44.789 "iscsi_get_connections", 00:06:44.789 "iscsi_portal_group_set_auth", 00:06:44.789 "iscsi_start_portal_group", 00:06:44.789 "iscsi_delete_portal_group", 00:06:44.789 "iscsi_create_portal_group", 00:06:44.789 "iscsi_get_portal_groups", 00:06:44.789 "iscsi_delete_target_node", 00:06:44.789 "iscsi_target_node_remove_pg_ig_maps", 00:06:44.789 "iscsi_target_node_add_pg_ig_maps", 00:06:44.789 "iscsi_create_target_node", 00:06:44.789 "iscsi_get_target_nodes", 00:06:44.789 "iscsi_delete_initiator_group", 00:06:44.789 "iscsi_initiator_group_remove_initiators", 00:06:44.789 "iscsi_initiator_group_add_initiators", 00:06:44.789 "iscsi_create_initiator_group", 00:06:44.789 "iscsi_get_initiator_groups", 00:06:44.789 "nvmf_set_crdt", 00:06:44.789 "nvmf_set_config", 00:06:44.789 "nvmf_set_max_subsystems", 00:06:44.789 "nvmf_stop_mdns_prr", 00:06:44.789 "nvmf_publish_mdns_prr", 00:06:44.789 "nvmf_subsystem_get_listeners", 00:06:44.789 "nvmf_subsystem_get_qpairs", 00:06:44.789 "nvmf_subsystem_get_controllers", 00:06:44.789 "nvmf_get_stats", 00:06:44.789 "nvmf_get_transports", 00:06:44.789 "nvmf_create_transport", 00:06:44.789 "nvmf_get_targets", 00:06:44.789 "nvmf_delete_target", 00:06:44.789 "nvmf_create_target", 00:06:44.789 "nvmf_subsystem_allow_any_host", 00:06:44.789 "nvmf_subsystem_remove_host", 00:06:44.789 "nvmf_subsystem_add_host", 00:06:44.789 "nvmf_ns_remove_host", 00:06:44.789 "nvmf_ns_add_host", 00:06:44.789 "nvmf_subsystem_remove_ns", 00:06:44.789 "nvmf_subsystem_add_ns", 00:06:44.789 "nvmf_subsystem_listener_set_ana_state", 00:06:44.789 "nvmf_discovery_get_referrals", 00:06:44.789 
"nvmf_discovery_remove_referral", 00:06:44.789 "nvmf_discovery_add_referral", 00:06:44.789 "nvmf_subsystem_remove_listener", 00:06:44.789 "nvmf_subsystem_add_listener", 00:06:44.789 "nvmf_delete_subsystem", 00:06:44.789 "nvmf_create_subsystem", 00:06:44.789 "nvmf_get_subsystems", 00:06:44.789 "env_dpdk_get_mem_stats", 00:06:44.789 "nbd_get_disks", 00:06:44.789 "nbd_stop_disk", 00:06:44.789 "nbd_start_disk", 00:06:44.789 "ublk_recover_disk", 00:06:44.789 "ublk_get_disks", 00:06:44.789 "ublk_stop_disk", 00:06:44.789 "ublk_start_disk", 00:06:44.789 "ublk_destroy_target", 00:06:44.789 "ublk_create_target", 00:06:44.789 "virtio_blk_create_transport", 00:06:44.789 "virtio_blk_get_transports", 00:06:44.789 "vhost_controller_set_coalescing", 00:06:44.789 "vhost_get_controllers", 00:06:44.789 "vhost_delete_controller", 00:06:44.789 "vhost_create_blk_controller", 00:06:44.789 "vhost_scsi_controller_remove_target", 00:06:44.789 "vhost_scsi_controller_add_target", 00:06:44.789 "vhost_start_scsi_controller", 00:06:44.789 "vhost_create_scsi_controller", 00:06:44.789 "thread_set_cpumask", 00:06:44.789 "scheduler_set_options", 00:06:44.789 "framework_get_governor", 00:06:44.789 "framework_get_scheduler", 00:06:44.789 "framework_set_scheduler", 00:06:44.789 "framework_get_reactors", 00:06:44.789 "thread_get_io_channels", 00:06:44.789 "thread_get_pollers", 00:06:44.789 "thread_get_stats", 00:06:44.789 "framework_monitor_context_switch", 00:06:44.789 "spdk_kill_instance", 00:06:44.789 "log_enable_timestamps", 00:06:44.789 "log_get_flags", 00:06:44.789 "log_clear_flag", 00:06:44.789 "log_set_flag", 00:06:44.789 "log_get_level", 00:06:44.789 "log_set_level", 00:06:44.789 "log_get_print_level", 00:06:44.789 "log_set_print_level", 00:06:44.789 "framework_enable_cpumask_locks", 00:06:44.789 "framework_disable_cpumask_locks", 00:06:44.789 "framework_wait_init", 00:06:44.789 "framework_start_init", 00:06:44.789 "scsi_get_devices", 00:06:44.789 "bdev_get_histogram", 00:06:44.789 
"bdev_enable_histogram", 00:06:44.789 "bdev_set_qos_limit", 00:06:44.789 "bdev_set_qd_sampling_period", 00:06:44.789 "bdev_get_bdevs", 00:06:44.789 "bdev_reset_iostat", 00:06:44.789 "bdev_get_iostat", 00:06:44.789 "bdev_examine", 00:06:44.789 "bdev_wait_for_examine", 00:06:44.789 "bdev_set_options", 00:06:44.789 "notify_get_notifications", 00:06:44.789 "notify_get_types", 00:06:44.789 "accel_get_stats", 00:06:44.789 "accel_set_options", 00:06:44.789 "accel_set_driver", 00:06:44.789 "accel_crypto_key_destroy", 00:06:44.789 "accel_crypto_keys_get", 00:06:44.789 "accel_crypto_key_create", 00:06:44.789 "accel_assign_opc", 00:06:44.789 "accel_get_module_info", 00:06:44.789 "accel_get_opc_assignments", 00:06:44.789 "vmd_rescan", 00:06:44.789 "vmd_remove_device", 00:06:44.789 "vmd_enable", 00:06:44.789 "sock_get_default_impl", 00:06:44.789 "sock_set_default_impl", 00:06:44.789 "sock_impl_set_options", 00:06:44.789 "sock_impl_get_options", 00:06:44.789 "iobuf_get_stats", 00:06:44.789 "iobuf_set_options", 00:06:44.789 "keyring_get_keys", 00:06:44.789 "framework_get_pci_devices", 00:06:44.789 "framework_get_config", 00:06:44.789 "framework_get_subsystems", 00:06:44.789 "vfu_tgt_set_base_path", 00:06:44.789 "trace_get_info", 00:06:44.789 "trace_get_tpoint_group_mask", 00:06:44.789 "trace_disable_tpoint_group", 00:06:44.789 "trace_enable_tpoint_group", 00:06:44.789 "trace_clear_tpoint_mask", 00:06:44.789 "trace_set_tpoint_mask", 00:06:44.789 "spdk_get_version", 00:06:44.789 "rpc_get_methods" 00:06:44.789 ] 00:06:44.789 07:12:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.790 07:12:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:44.790 07:12:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2355356 00:06:44.790 07:12:17 spdkcli_tcp -- 
common/autotest_common.sh@950 -- # '[' -z 2355356 ']' 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2355356 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2355356 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2355356' 00:06:44.790 killing process with pid 2355356 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2355356 00:06:44.790 07:12:17 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2355356 00:06:45.354 00:06:45.354 real 0m1.290s 00:06:45.354 user 0m2.225s 00:06:45.354 sys 0m0.458s 00:06:45.354 07:12:17 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.354 07:12:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.354 ************************************ 00:06:45.354 END TEST spdkcli_tcp 00:06:45.354 ************************************ 00:06:45.354 07:12:17 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.354 07:12:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.354 07:12:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.354 07:12:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.354 ************************************ 00:06:45.354 START TEST dpdk_mem_utility 00:06:45.354 ************************************ 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.354 * Looking for test storage... 00:06:45.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:45.354 07:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.354 07:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2355565 00:06:45.354 07:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.354 07:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2355565 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2355565 ']' 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.354 07:12:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.354 [2024-07-25 07:12:17.761330] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:06:45.354 [2024-07-25 07:12:17.761407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355565 ] 00:06:45.354 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.354 [2024-07-25 07:12:17.817725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.611 [2024-07-25 07:12:17.923635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.869 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.869 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:45.869 07:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.869 07:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.869 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.869 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.869 { 00:06:45.869 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.869 } 00:06:45.869 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.869 07:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.869 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:45.869 1 heaps totaling size 814.000000 MiB 00:06:45.869 size: 814.000000 MiB heap id: 0 00:06:45.869 end heaps---------- 00:06:45.869 8 mempools totaling size 598.116089 MiB 00:06:45.869 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.869 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.869 size: 84.521057 MiB name: bdev_io_2355565 00:06:45.869 size: 51.011292 MiB name: evtpool_2355565 
00:06:45.869 size: 50.003479 MiB name: msgpool_2355565 00:06:45.869 size: 21.763794 MiB name: PDU_Pool 00:06:45.869 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.869 size: 0.026123 MiB name: Session_Pool 00:06:45.869 end mempools------- 00:06:45.869 6 memzones totaling size 4.142822 MiB 00:06:45.869 size: 1.000366 MiB name: RG_ring_0_2355565 00:06:45.869 size: 1.000366 MiB name: RG_ring_1_2355565 00:06:45.869 size: 1.000366 MiB name: RG_ring_4_2355565 00:06:45.869 size: 1.000366 MiB name: RG_ring_5_2355565 00:06:45.869 size: 0.125366 MiB name: RG_ring_2_2355565 00:06:45.869 size: 0.015991 MiB name: RG_ring_3_2355565 00:06:45.869 end memzones------- 00:06:45.869 07:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.869 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:45.869 list of free elements. size: 12.519348 MiB 00:06:45.869 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:45.869 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:45.869 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:45.869 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:45.869 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:45.869 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:45.869 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:45.869 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:45.869 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:45.869 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:45.869 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:45.869 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:45.869 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:45.869 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:06:45.869 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:45.869 list of standard malloc elements. size: 199.218079 MiB 00:06:45.869 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:45.869 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:45.869 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:45.869 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:45.869 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:45.869 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:45.869 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:45.869 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:45.869 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:45.869 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:45.869 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:45.869 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:45.869 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:45.869 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:45.869 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:45.869 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:45.869 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:45.869 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:45.870 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:45.870 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:45.870 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:45.870 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:45.870 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:45.870 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:45.870 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:45.870 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:45.870 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:45.870 list of memzone associated elements. 
size: 602.262573 MiB
00:06:45.870 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:45.870 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:45.870 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:45.870 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:45.870 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:45.870 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2355565_0
00:06:45.870 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:45.870 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2355565_0
00:06:45.870 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:45.870 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2355565_0
00:06:45.870 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:45.870 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:45.870 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:45.870 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:45.870 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:45.870 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2355565
00:06:45.870 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:45.870 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2355565
00:06:45.870 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:45.870 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2355565
00:06:45.870 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:45.870 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:45.870 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:45.870 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:45.870 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:45.870 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:45.870 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:45.870 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:45.870 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:45.870 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2355565
00:06:45.870 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:45.870 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2355565
00:06:45.870 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:45.870 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2355565
00:06:45.870 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:45.870 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2355565
00:06:45.870 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:45.870 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2355565
00:06:45.870 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:45.870 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:45.870 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:45.870 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:45.870 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:45.870 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:45.870 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:45.870 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2355565
00:06:45.870 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:45.870 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:45.870 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:45.870 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:45.870 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:45.870 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2355565
00:06:45.870 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:45.870 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:45.870 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:45.870 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2355565
00:06:45.870 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:45.870 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2355565
00:06:45.870 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:45.870 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:45.870 07:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:45.870 07:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2355565
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2355565 ']'
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2355565
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2355565
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2355565' killing process with pid 2355565
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2355565
00:06:45.870 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2355565
00:06:46.451
00:06:46.451 real 0m1.146s
00:06:46.451 user 0m1.107s
00:06:46.451 sys 0m0.414s
00:06:46.451 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:46.451 07:12:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:46.451 ************************************
00:06:46.451 END TEST dpdk_mem_utility
00:06:46.451 ************************************
00:06:46.451 07:12:18 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:46.451 07:12:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:46.451 07:12:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:46.451 07:12:18 -- common/autotest_common.sh@10 -- # set +x
00:06:46.451 ************************************
00:06:46.451 START TEST event
00:06:46.451 ************************************
00:06:46.451 07:12:18 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:46.451 * Looking for test storage...
00:06:46.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:46.451 07:12:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:46.451 07:12:18 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:46.451 07:12:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:46.451 07:12:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:46.451 07:12:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:46.451 07:12:18 event -- common/autotest_common.sh@10 -- # set +x
00:06:46.451 ************************************
00:06:46.451 START TEST event_perf
00:06:46.451 ************************************
00:06:46.451 07:12:18 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:46.451 Running I/O for 1 seconds...[2024-07-25 07:12:18.946393] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:06:46.451 [2024-07-25 07:12:18.946470] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355755 ]
00:06:46.451 EAL: No free 2048 kB hugepages reported on node 1
00:06:46.708 [2024-07-25 07:12:19.009508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:46.708 [2024-07-25 07:12:19.127861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:46.708 [2024-07-25 07:12:19.127932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:46.708 [2024-07-25 07:12:19.128023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:46.708 [2024-07-25 07:12:19.128026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.079 Running I/O for 1 seconds...
00:06:48.079 lcore 0: 224491
00:06:48.079 lcore 1: 224490
00:06:48.079 lcore 2: 224490
00:06:48.079 lcore 3: 224490
00:06:48.079 done.
00:06:48.079
00:06:48.079 real 0m1.320s
00:06:48.079 user 0m4.233s
00:06:48.079 sys 0m0.081s
00:06:48.079 07:12:20 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:48.079 07:12:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:48.079 ************************************
00:06:48.079 END TEST event_perf
00:06:48.079 ************************************
00:06:48.079 07:12:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:48.079 07:12:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:48.079 07:12:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:48.079 07:12:20 event -- common/autotest_common.sh@10 -- # set +x
00:06:48.079 ************************************
00:06:48.079 START TEST event_reactor
00:06:48.079 ************************************
00:06:48.079 07:12:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:48.079 [2024-07-25 07:12:20.312737] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:06:48.079 [2024-07-25 07:12:20.312803] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355910 ]
00:06:48.079 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.079 [2024-07-25 07:12:20.375655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.079 [2024-07-25 07:12:20.494446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.448 test_start
00:06:49.448 oneshot
00:06:49.448 tick 100
00:06:49.448 tick 100
00:06:49.448 tick 250
00:06:49.448 tick 100
00:06:49.448 tick 100
00:06:49.448 tick 100
00:06:49.448 tick 250
00:06:49.448 tick 500
00:06:49.448 tick 100
00:06:49.448 tick 100
00:06:49.448 tick 250
00:06:49.448 tick 100
00:06:49.448 tick 100
00:06:49.448 test_end
00:06:49.448
00:06:49.448 real 0m1.316s
00:06:49.448 user 0m1.228s
00:06:49.448 sys 0m0.083s
00:06:49.449 07:12:21 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:49.449 07:12:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:49.449 ************************************
00:06:49.449 END TEST event_reactor
00:06:49.449 ************************************
00:06:49.449 07:12:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:49.449 07:12:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:49.449 07:12:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:49.449 07:12:21 event -- common/autotest_common.sh@10 -- # set +x
00:06:49.449 ************************************
00:06:49.449 START TEST event_reactor_perf
00:06:49.449 ************************************
00:06:49.449 07:12:21 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:49.449 [2024-07-25 07:12:21.671940] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:06:49.449 [2024-07-25 07:12:21.672011] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356158 ]
00:06:49.449 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.449 [2024-07-25 07:12:21.738399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.449 [2024-07-25 07:12:21.856650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.820 test_start
00:06:50.820 test_end
00:06:50.820 Performance: 354916 events per second
00:06:50.820
00:06:50.820 real 0m1.321s
00:06:50.820 user 0m1.234s
00:06:50.820 sys 0m0.083s
00:06:50.820 07:12:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.820 07:12:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:50.820 ************************************
00:06:50.820 END TEST event_reactor_perf
00:06:50.820 ************************************
00:06:50.820 07:12:23 event -- event/event.sh@49 -- # uname -s
00:06:50.820 07:12:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:50.820 07:12:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:50.820 07:12:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:50.820 07:12:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.820 07:12:23 event -- common/autotest_common.sh@10 -- # set +x
00:06:50.820 ************************************
00:06:50.820 START TEST event_scheduler
00:06:50.820 ************************************
00:06:50.820 07:12:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:50.820 * Looking for test storage...
00:06:50.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:50.820 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:50.820 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2356368
00:06:50.820 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:50.820 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:50.820 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2356368
00:06:50.820 07:12:23 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2356368 ']'
00:06:50.820 07:12:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:50.821 [2024-07-25 07:12:23.128760] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:06:50.821 [2024-07-25 07:12:23.128850] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356368 ]
00:06:50.821 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.821 [2024-07-25 07:12:23.185982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:50.821 [2024-07-25 07:12:23.296287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.821 [2024-07-25 07:12:23.296343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:50.821 [2024-07-25 07:12:23.296409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:50.821 [2024-07-25 07:12:23.296413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:06:50.821 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:50.821 [2024-07-25 07:12:23.345144] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:50.821 [2024-07-25 07:12:23.345168] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:50.821 [2024-07-25 07:12:23.345200] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:50.821 [2024-07-25 07:12:23.345211] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:50.821 [2024-07-25 07:12:23.345221] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:50.821 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:50.821 07:12:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 [2024-07-25 07:12:23.442849] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:51.078 07:12:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:51.078 07:12:23 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:51.078 07:12:23 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 ************************************
00:06:51.078 START TEST scheduler_create_thread
00:06:51.078 ************************************
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 2
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 3
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 4
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 5
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 6
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 7
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 8
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 9
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 10
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.078 07:12:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.643 07:12:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.643
00:06:51.643 real 0m0.591s
00:06:51.643 user 0m0.008s
00:06:51.643 sys 0m0.005s
00:06:51.643 07:12:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:51.643 07:12:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.643 ************************************
00:06:51.643 END TEST scheduler_create_thread
00:06:51.643 ************************************
00:06:51.643 07:12:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:51.643 07:12:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2356368
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2356368 ']'
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2356368
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2356368
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2356368' killing process with pid 2356368
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2356368
00:06:51.643 07:12:24 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2356368
00:06:52.209 [2024-07-25 07:12:24.543131] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:52.467
00:06:52.467 real 0m1.770s
00:06:52.467 user 0m2.253s
00:06:52.467 sys 0m0.322s
00:06:52.467 07:12:24 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:52.467 07:12:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:52.467 ************************************
00:06:52.467 END TEST event_scheduler
00:06:52.467 ************************************
00:06:52.467 07:12:24 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:52.467 07:12:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:52.467 07:12:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:52.467 07:12:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.467 07:12:24 event -- common/autotest_common.sh@10 -- # set +x
00:06:52.467 ************************************
00:06:52.467 START TEST app_repeat
00:06:52.467 ************************************
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2356564
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2356564' Process app_repeat pid: 2356564
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' spdk_app_start Round 0
00:06:52.467 07:12:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2356564 /var/tmp/spdk-nbd.sock
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2356564 ']'
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:52.467 07:12:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:52.467 [2024-07-25 07:12:24.885161] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:06:52.468 [2024-07-25 07:12:24.885229] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356564 ]
00:06:52.468 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.468 [2024-07-25 07:12:24.948747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:52.726 [2024-07-25 07:12:25.068519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:52.726 [2024-07-25 07:12:25.068524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.726 07:12:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:52.726 07:12:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:52.726 07:12:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:52.984 Malloc0
00:06:52.984 07:12:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:53.243 Malloc1
00:06:53.243 07:12:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:53.243 07:12:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:53.501 /dev/nbd0
00:06:53.501 07:12:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:53.501 07:12:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:53.501 1+0 records in
00:06:53.501 1+0 records out
00:06:53.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018574 s, 22.1 MB/s
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:53.501 07:12:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:53.501 07:12:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:53.501 07:12:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:53.501 07:12:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:53.759 /dev/nbd1
00:06:53.759 07:12:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:53.759 07:12:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@884
-- # (( i = 1 )) 00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.759 1+0 records in 00:06:53.759 1+0 records out 00:06:53.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201186 s, 20.4 MB/s 00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:53.759 07:12:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.017 07:12:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.017 07:12:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.017 { 00:06:54.017 "nbd_device": "/dev/nbd0", 00:06:54.017 "bdev_name": "Malloc0" 00:06:54.017 }, 00:06:54.017 { 00:06:54.017 "nbd_device": "/dev/nbd1", 00:06:54.017 "bdev_name": "Malloc1" 00:06:54.017 } 00:06:54.017 ]' 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.017 { 00:06:54.017 "nbd_device": 
"/dev/nbd0", 00:06:54.017 "bdev_name": "Malloc0" 00:06:54.017 }, 00:06:54.017 { 00:06:54.017 "nbd_device": "/dev/nbd1", 00:06:54.017 "bdev_name": "Malloc1" 00:06:54.017 } 00:06:54.017 ]' 00:06:54.017 07:12:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.275 /dev/nbd1' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.275 /dev/nbd1' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.275 256+0 records in 00:06:54.275 256+0 records out 00:06:54.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377127 s, 278 MB/s 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.275 07:12:26 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.275 256+0 records in 00:06:54.275 256+0 records out 00:06:54.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211963 s, 49.5 MB/s 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.275 256+0 records in 00:06:54.275 256+0 records out 00:06:54.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274766 s, 38.2 MB/s 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.275 07:12:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.534 07:12:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.535 07:12:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.792 
07:12:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.792 07:12:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.050 07:12:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.050 07:12:27 event.app_repeat -- 
event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.308 07:12:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.565 [2024-07-25 07:12:28.026059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.824 [2024-07-25 07:12:28.141793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.824 [2024-07-25 07:12:28.141793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.824 [2024-07-25 07:12:28.203445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.824 [2024-07-25 07:12:28.203517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.352 07:12:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.352 07:12:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.352 spdk_app_start Round 1 00:06:58.352 07:12:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2356564 /var/tmp/spdk-nbd.sock 00:06:58.352 07:12:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2356564 ']' 00:06:58.352 07:12:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.352 07:12:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.352 07:12:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:58.352 07:12:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.352 07:12:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.610 07:12:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.610 07:12:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:58.610 07:12:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.867 Malloc0 00:06:58.867 07:12:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.125 Malloc1 00:06:59.125 07:12:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.125 07:12:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.382 /dev/nbd0 00:06:59.382 07:12:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.382 07:12:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.382 07:12:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.382 1+0 records in 00:06:59.382 1+0 records out 00:06:59.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192085 s, 21.3 MB/s 00:06:59.383 07:12:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.383 07:12:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:59.383 07:12:31 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.383 07:12:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.383 07:12:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:59.383 07:12:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.383 07:12:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.383 07:12:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.640 /dev/nbd1 00:06:59.640 07:12:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.640 07:12:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.640 1+0 records in 00:06:59.640 1+0 records out 00:06:59.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192805 s, 21.2 MB/s 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.640 07:12:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:59.641 07:12:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.641 07:12:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.641 07:12:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.641 07:12:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.641 07:12:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.898 07:12:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.898 { 00:06:59.898 "nbd_device": "/dev/nbd0", 00:06:59.898 "bdev_name": "Malloc0" 00:06:59.898 }, 00:06:59.898 { 00:06:59.899 "nbd_device": "/dev/nbd1", 00:06:59.899 "bdev_name": "Malloc1" 00:06:59.899 } 00:06:59.899 ]' 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.899 { 00:06:59.899 "nbd_device": "/dev/nbd0", 00:06:59.899 "bdev_name": "Malloc0" 00:06:59.899 }, 00:06:59.899 { 00:06:59.899 "nbd_device": "/dev/nbd1", 00:06:59.899 "bdev_name": "Malloc1" 00:06:59.899 } 00:06:59.899 ]' 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.899 /dev/nbd1' 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.899 /dev/nbd1' 00:06:59.899 
07:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.899 256+0 records in 00:06:59.899 256+0 records out 00:06:59.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405294 s, 259 MB/s 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.899 07:12:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.156 256+0 records in 00:07:00.156 256+0 records out 00:07:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278821 s, 37.6 MB/s 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.156 256+0 records in 00:07:00.156 256+0 records out 00:07:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273219 s, 38.4 MB/s 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.156 07:12:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:00.157 07:12:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.157 07:12:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.157 07:12:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.157 07:12:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.415 07:12:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.672 07:12:33 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.672 07:12:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.930 07:12:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.930 07:12:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.188 07:12:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.445 [2024-07-25 07:12:33.882592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.703 [2024-07-25 07:12:33.999881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.703 [2024-07-25 07:12:33.999884] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.703 [2024-07-25 07:12:34.057166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.703 [2024-07-25 07:12:34.057235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.228 07:12:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.228 07:12:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:04.228 spdk_app_start Round 2 00:07:04.228 07:12:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2356564 /var/tmp/spdk-nbd.sock 00:07:04.228 07:12:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2356564 ']' 00:07:04.228 07:12:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.228 07:12:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.228 07:12:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:04.228 07:12:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.228 07:12:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.486 07:12:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.486 07:12:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:04.486 07:12:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.744 Malloc0 00:07:04.744 07:12:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.001 Malloc1 00:07:05.001 07:12:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.001 07:12:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.259 /dev/nbd0 00:07:05.259 07:12:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.259 07:12:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.259 07:12:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:05.259 07:12:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.260 1+0 records in 00:07:05.260 1+0 records out 00:07:05.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185598 s, 22.1 MB/s 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.260 07:12:37 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.260 07:12:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:05.260 07:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.260 07:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.260 07:12:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.517 /dev/nbd1 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.517 1+0 records in 00:07:05.517 1+0 records out 00:07:05.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216989 s, 18.9 MB/s 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.517 07:12:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.517 07:12:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.775 { 00:07:05.775 "nbd_device": "/dev/nbd0", 00:07:05.775 "bdev_name": "Malloc0" 00:07:05.775 }, 00:07:05.775 { 00:07:05.775 "nbd_device": "/dev/nbd1", 00:07:05.775 "bdev_name": "Malloc1" 00:07:05.775 } 00:07:05.775 ]' 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.775 { 00:07:05.775 "nbd_device": "/dev/nbd0", 00:07:05.775 "bdev_name": "Malloc0" 00:07:05.775 }, 00:07:05.775 { 00:07:05.775 "nbd_device": "/dev/nbd1", 00:07:05.775 "bdev_name": "Malloc1" 00:07:05.775 } 00:07:05.775 ]' 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.775 /dev/nbd1' 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.775 /dev/nbd1' 00:07:05.775 
07:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.775 256+0 records in 00:07:05.775 256+0 records out 00:07:05.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500017 s, 210 MB/s 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.775 256+0 records in 00:07:05.775 256+0 records out 00:07:05.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024064 s, 43.6 MB/s 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.775 256+0 records in 00:07:05.775 256+0 records out 00:07:05.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289907 s, 36.2 MB/s 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.775 07:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.032 07:12:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.290 07:12:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.548 07:12:38 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.548 07:12:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.806 07:12:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.806 07:12:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.064 07:12:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.323 [2024-07-25 07:12:39.705765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.323 [2024-07-25 07:12:39.822078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.323 [2024-07-25 07:12:39.822083] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.581 [2024-07-25 07:12:39.883973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.581 [2024-07-25 07:12:39.884053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.116 07:12:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2356564 /var/tmp/spdk-nbd.sock 00:07:10.116 07:12:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2356564 ']' 00:07:10.116 07:12:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.116 07:12:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.116 07:12:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:10.116 07:12:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.116 07:12:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:10.374 07:12:42 event.app_repeat -- event/event.sh@39 -- # killprocess 2356564 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2356564 ']' 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2356564 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2356564 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2356564' 00:07:10.374 killing process with pid 2356564 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2356564 00:07:10.374 07:12:42 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2356564 00:07:10.632 spdk_app_start is called in Round 0. 00:07:10.632 Shutdown signal received, stop current app iteration 00:07:10.632 Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 reinitialization... 00:07:10.632 spdk_app_start is called in Round 1. 00:07:10.632 Shutdown signal received, stop current app iteration 00:07:10.632 Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 reinitialization... 00:07:10.632 spdk_app_start is called in Round 2. 
00:07:10.632 Shutdown signal received, stop current app iteration 00:07:10.632 Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 reinitialization... 00:07:10.632 spdk_app_start is called in Round 3. 00:07:10.632 Shutdown signal received, stop current app iteration 00:07:10.632 07:12:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:10.632 07:12:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:10.632 00:07:10.632 real 0m18.096s 00:07:10.632 user 0m39.129s 00:07:10.632 sys 0m3.259s 00:07:10.632 07:12:42 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.632 07:12:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.632 ************************************ 00:07:10.632 END TEST app_repeat 00:07:10.632 ************************************ 00:07:10.632 07:12:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:10.632 07:12:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.632 07:12:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.632 07:12:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.632 07:12:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.632 ************************************ 00:07:10.632 START TEST cpu_locks 00:07:10.632 ************************************ 00:07:10.632 07:12:43 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.632 * Looking for test storage... 
00:07:10.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:10.632 07:12:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.632 07:12:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.632 07:12:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.632 07:12:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.632 07:12:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.632 07:12:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.632 07:12:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.632 ************************************ 00:07:10.632 START TEST default_locks 00:07:10.632 ************************************ 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2359035 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2359035 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2359035 ']' 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.632 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:10.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.633 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.633 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.633 [2024-07-25 07:12:43.124286] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:10.633 [2024-07-25 07:12:43.124380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359035 ] 00:07:10.633 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.891 [2024-07-25 07:12:43.182270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.891 [2024-07-25 07:12:43.287920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.149 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.149 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:11.149 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2359035 00:07:11.149 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2359035 00:07:11.149 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.405 lslocks: write error 00:07:11.405 07:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2359035 00:07:11.405 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2359035 ']' 00:07:11.405 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2359035 00:07:11.405 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:11.405 07:12:43 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.405 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359035 00:07:11.662 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.662 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.662 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359035' 00:07:11.662 killing process with pid 2359035 00:07:11.662 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2359035 00:07:11.662 07:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2359035 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2359035 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2359035 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2359035 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2359035 ']' 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2359035) - No such process 00:07:11.921 ERROR: process (pid: 2359035) is no longer running 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.921 00:07:11.921 real 0m1.327s 00:07:11.921 user 0m1.261s 00:07:11.921 sys 0m0.545s 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.921 07:12:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.921 ************************************ 00:07:11.921 END TEST default_locks 00:07:11.921 ************************************ 00:07:11.921 07:12:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.921 07:12:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.921 07:12:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.921 07:12:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.179 ************************************ 00:07:12.179 START TEST default_locks_via_rpc 00:07:12.179 ************************************ 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2359198 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2359198 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2359198 ']' 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.179 07:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.179 [2024-07-25 07:12:44.507910] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:12.179 [2024-07-25 07:12:44.508001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359198 ] 00:07:12.179 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.179 [2024-07-25 07:12:44.569492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.179 [2024-07-25 07:12:44.682504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2359198 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2359198 00:07:13.113 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.370 07:12:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2359198 00:07:13.370 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2359198 ']' 00:07:13.370 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2359198 00:07:13.370 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:13.370 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.370 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359198 00:07:13.371 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.371 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.371 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359198' 00:07:13.371 killing process with pid 2359198 00:07:13.371 07:12:45 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 2359198 00:07:13.371 07:12:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2359198 00:07:13.937 00:07:13.937 real 0m1.817s 00:07:13.937 user 0m1.938s 00:07:13.937 sys 0m0.576s 00:07:13.937 07:12:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.937 07:12:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 ************************************ 00:07:13.937 END TEST default_locks_via_rpc 00:07:13.937 ************************************ 00:07:13.937 07:12:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:13.937 07:12:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.937 07:12:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.937 07:12:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 ************************************ 00:07:13.937 START TEST non_locking_app_on_locked_coremask 00:07:13.937 ************************************ 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2359410 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2359410 /var/tmp/spdk.sock 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2359410 ']' 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.937 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 [2024-07-25 07:12:46.373821] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:13.937 [2024-07-25 07:12:46.373914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359410 ] 00:07:13.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.937 [2024-07-25 07:12:46.432947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.195 [2024-07-25 07:12:46.540317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2359496 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2359496 /var/tmp/spdk2.sock 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2359496 ']' 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.453 07:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.453 [2024-07-25 07:12:46.848859] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:14.453 [2024-07-25 07:12:46.848950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359496 ] 00:07:14.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.453 [2024-07-25 07:12:46.939412] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
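The repeated `killprocess` sequences in this log follow a guarded pattern: confirm the pid is still alive, resolve its process name with `ps --no-headers -o comm=`, and refuse to SIGKILL a `sudo` wrapper (signaling its children instead). A hedged sketch reconstructed from the trace, not the actual `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the guarded kill pattern seen in the trace; reconstructed,
# not copied from autotest_common.sh.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -c -o command -p "$pid" | tail -n1)
    fi
    if [ "$process_name" = sudo ]; then
        # never SIGKILL the sudo wrapper itself; signal its children
        pkill -P "$pid"
    else
        echo "killing process with pid $pid"
        kill -9 "$pid"
    fi
}
```

In the trace the resolved name is `reactor_0` (the SPDK reactor thread name), so the `sudo` branch is skipped and the target is killed directly, after which the test `wait`s on the pid.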
00:07:14.454 [2024-07-25 07:12:46.939444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.712 [2024-07-25 07:12:47.177384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.278 07:12:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.278 07:12:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.278 07:12:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2359410 00:07:15.278 07:12:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2359410 00:07:15.278 07:12:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.844 lslocks: write error 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2359410 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2359410 ']' 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2359410 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359410 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2359410' 00:07:15.844 killing process with pid 2359410 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2359410 00:07:15.844 07:12:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2359410 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2359496 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2359496 ']' 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2359496 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359496 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359496' 00:07:16.776 killing process with pid 2359496 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2359496 00:07:16.776 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2359496 00:07:17.342 00:07:17.342 real 0m3.273s 00:07:17.342 user 0m3.437s 00:07:17.342 sys 0m1.000s 00:07:17.342 07:12:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.342 07:12:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.342 ************************************ 00:07:17.342 END TEST non_locking_app_on_locked_coremask 00:07:17.342 ************************************ 00:07:17.342 07:12:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:17.342 07:12:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.342 07:12:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.342 07:12:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.342 ************************************ 00:07:17.342 START TEST locking_app_on_unlocked_coremask 00:07:17.342 ************************************ 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2359806 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2359806 /var/tmp/spdk.sock 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2359806 ']' 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.342 07:12:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.342 07:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.342 [2024-07-25 07:12:49.697291] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:17.342 [2024-07-25 07:12:49.697388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359806 ] 00:07:17.342 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.342 [2024-07-25 07:12:49.755974] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.342 [2024-07-25 07:12:49.756010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.342 [2024-07-25 07:12:49.863949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2359936 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2359936 /var/tmp/spdk2.sock 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2359936 ']' 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.600 07:12:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.858 [2024-07-25 07:12:50.167876] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:07:17.858 [2024-07-25 07:12:50.167965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359936 ] 00:07:17.858 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.858 [2024-07-25 07:12:50.259606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.117 [2024-07-25 07:12:50.497526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.682 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.682 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:18.682 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2359936 00:07:18.682 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2359936 00:07:18.682 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.939 lslocks: write error 00:07:18.939 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2359806 00:07:18.939 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2359806 ']' 00:07:18.939 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2359806 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359806 00:07:19.197 07:12:51 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359806' 00:07:19.197 killing process with pid 2359806 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2359806 00:07:19.197 07:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2359806 00:07:20.128 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2359936 00:07:20.128 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2359936 ']' 00:07:20.128 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2359936 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359936 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359936' 00:07:20.129 killing process with pid 2359936 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 2359936 00:07:20.129 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2359936 00:07:20.694 00:07:20.694 real 0m3.273s 00:07:20.694 user 0m3.409s 00:07:20.694 sys 0m1.026s 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.694 ************************************ 00:07:20.694 END TEST locking_app_on_unlocked_coremask 00:07:20.694 ************************************ 00:07:20.694 07:12:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:20.694 07:12:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.694 07:12:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.694 07:12:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.694 ************************************ 00:07:20.694 START TEST locking_app_on_locked_coremask 00:07:20.694 ************************************ 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2360242 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2360242 /var/tmp/spdk.sock 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2360242 ']' 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.694 07:12:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.694 [2024-07-25 07:12:53.013476] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:20.694 [2024-07-25 07:12:53.013558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360242 ] 00:07:20.694 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.694 [2024-07-25 07:12:53.075541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.694 [2024-07-25 07:12:53.199108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2360370 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:20.953 
07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2360370 /var/tmp/spdk2.sock 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2360370 /var/tmp/spdk2.sock 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2360370 /var/tmp/spdk2.sock 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2360370 ']' 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.953 07:12:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.226 [2024-07-25 07:12:53.518989] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:21.226 [2024-07-25 07:12:53.519076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360370 ] 00:07:21.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.226 [2024-07-25 07:12:53.615374] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2360242 has claimed it. 00:07:21.226 [2024-07-25 07:12:53.615435] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2360370) - No such process 00:07:21.805 ERROR: process (pid: 2360370) is no longer running 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 2360242 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2360242 00:07:21.805 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.062 lslocks: write error 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2360242 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2360242 ']' 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2360242 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2360242 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2360242' 00:07:22.062 killing process with pid 2360242 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2360242 00:07:22.062 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2360242 00:07:22.626 00:07:22.626 real 0m1.968s 00:07:22.626 user 0m2.111s 00:07:22.626 sys 0m0.609s 00:07:22.626 07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.626 
07:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.626 ************************************ 00:07:22.626 END TEST locking_app_on_locked_coremask 00:07:22.626 ************************************ 00:07:22.626 07:12:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:22.626 07:12:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.626 07:12:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.626 07:12:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.626 ************************************ 00:07:22.626 START TEST locking_overlapped_coremask 00:07:22.626 ************************************ 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2360541 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2360541 /var/tmp/spdk.sock 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2360541 ']' 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.626 07:12:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.626 [2024-07-25 07:12:55.036554] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:22.626 [2024-07-25 07:12:55.036673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360541 ] 00:07:22.626 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.626 [2024-07-25 07:12:55.099924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.884 [2024-07-25 07:12:55.214999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.884 [2024-07-25 07:12:55.215050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.884 [2024-07-25 07:12:55.215068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2360680 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2360680 /var/tmp/spdk2.sock 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2360680 /var/tmp/spdk2.sock 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2360680 /var/tmp/spdk2.sock 00:07:23.447 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2360680 ']' 00:07:23.448 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.448 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.448 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.448 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.448 07:12:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.704 [2024-07-25 07:12:56.015657] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:07:23.704 [2024-07-25 07:12:56.015742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360680 ] 00:07:23.704 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.704 [2024-07-25 07:12:56.101837] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2360541 has claimed it. 00:07:23.704 [2024-07-25 07:12:56.101904] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2360680) - No such process 00:07:24.269 ERROR: process (pid: 2360680) is no longer running 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.269 07:12:56 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2360541 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2360541 ']' 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2360541 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2360541 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2360541' 00:07:24.269 killing process with pid 2360541 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2360541 00:07:24.269 07:12:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2360541 00:07:24.834 00:07:24.834 real 0m2.212s 00:07:24.834 user 0m6.197s 00:07:24.834 sys 0m0.461s 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.834 07:12:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.834 ************************************ 00:07:24.834 END TEST locking_overlapped_coremask 00:07:24.834 ************************************ 00:07:24.834 07:12:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:24.834 07:12:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.834 07:12:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.834 07:12:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.834 ************************************ 00:07:24.834 START TEST locking_overlapped_coremask_via_rpc 00:07:24.834 ************************************ 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2360844 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2360844 /var/tmp/spdk.sock 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2360844 ']' 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.834 07:12:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.834 [2024-07-25 07:12:57.299695] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:24.834 [2024-07-25 07:12:57.299780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360844 ] 00:07:24.834 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.834 [2024-07-25 07:12:57.361305] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:24.834 [2024-07-25 07:12:57.361339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.092 [2024-07-25 07:12:57.478015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.092 [2024-07-25 07:12:57.478085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.092 [2024-07-25 07:12:57.478088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2360982 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 
--disable-cpumask-locks 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2360982 /var/tmp/spdk2.sock 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2360982 ']' 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.025 07:12:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 [2024-07-25 07:12:58.273541] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:26.025 [2024-07-25 07:12:58.273630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360982 ] 00:07:26.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.025 [2024-07-25 07:12:58.362161] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:26.025 [2024-07-25 07:12:58.362198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.282 [2024-07-25 07:12:58.586265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.282 [2024-07-25 07:12:58.586324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:26.282 [2024-07-25 07:12:58.586326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.847 07:12:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.847 [2024-07-25 07:12:59.201340] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2360844 has claimed it. 00:07:26.847 request: 00:07:26.847 { 00:07:26.847 "method": "framework_enable_cpumask_locks", 00:07:26.847 "req_id": 1 00:07:26.847 } 00:07:26.847 Got JSON-RPC error response 00:07:26.847 response: 00:07:26.847 { 00:07:26.847 "code": -32603, 00:07:26.847 "message": "Failed to claim CPU core: 2" 00:07:26.847 } 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2360844 /var/tmp/spdk.sock 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 2360844 ']' 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.847 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2360982 /var/tmp/spdk2.sock 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2360982 ']' 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.140 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:27.398 00:07:27.398 real 0m2.463s 00:07:27.398 user 0m1.183s 00:07:27.398 sys 0m0.212s 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.398 07:12:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.398 ************************************ 00:07:27.398 END TEST locking_overlapped_coremask_via_rpc 00:07:27.398 ************************************ 00:07:27.398 07:12:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:27.398 07:12:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2360844 ]] 00:07:27.398 07:12:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2360844 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2360844 ']' 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2360844 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2360844 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2360844' 00:07:27.398 killing process with pid 2360844 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2360844 00:07:27.398 07:12:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2360844 00:07:27.963 07:13:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2360982 ]] 00:07:27.963 07:13:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2360982 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2360982 ']' 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2360982 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2360982 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2360982' 00:07:27.963 killing process with pid 2360982 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2360982 00:07:27.963 07:13:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2360982 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2360844 ]] 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2360844 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2360844 ']' 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2360844 00:07:28.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2360844) - No such process 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2360844 is not found' 00:07:28.221 Process with pid 2360844 is not found 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2360982 ]] 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2360982 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2360982 ']' 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2360982 00:07:28.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2360982) - No such process 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2360982 is not found' 00:07:28.221 Process with pid 2360982 is not found 00:07:28.221 07:13:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:28.221 00:07:28.221 real 0m17.689s 00:07:28.221 user 0m31.827s 00:07:28.221 sys 0m5.321s 00:07:28.221 07:13:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.221 
07:13:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.221 ************************************ 00:07:28.221 END TEST cpu_locks 00:07:28.221 ************************************ 00:07:28.221 00:07:28.221 real 0m41.860s 00:07:28.221 user 1m20.021s 00:07:28.221 sys 0m9.401s 00:07:28.221 07:13:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.221 07:13:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.221 ************************************ 00:07:28.221 END TEST event 00:07:28.221 ************************************ 00:07:28.221 07:13:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:28.221 07:13:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.221 07:13:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.221 07:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:28.479 ************************************ 00:07:28.479 START TEST thread 00:07:28.479 ************************************ 00:07:28.479 07:13:00 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:28.479 * Looking for test storage... 
00:07:28.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:28.479 07:13:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:28.479 07:13:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:28.479 07:13:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.479 07:13:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.479 ************************************ 00:07:28.479 START TEST thread_poller_perf 00:07:28.479 ************************************ 00:07:28.479 07:13:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:28.479 [2024-07-25 07:13:00.840988] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:07:28.479 [2024-07-25 07:13:00.841058] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361345 ] 00:07:28.479 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.479 [2024-07-25 07:13:00.898191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.736 [2024-07-25 07:13:01.011146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.736 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:29.671 ====================================== 00:07:29.671 busy:2708673584 (cyc) 00:07:29.671 total_run_count: 296000 00:07:29.671 tsc_hz: 2700000000 (cyc) 00:07:29.671 ====================================== 00:07:29.671 poller_cost: 9150 (cyc), 3388 (nsec) 00:07:29.671 00:07:29.671 real 0m1.309s 00:07:29.671 user 0m1.226s 00:07:29.671 sys 0m0.078s 00:07:29.671 07:13:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.671 07:13:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.671 ************************************ 00:07:29.671 END TEST thread_poller_perf 00:07:29.671 ************************************ 00:07:29.671 07:13:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.671 07:13:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:29.671 07:13:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.671 07:13:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.671 ************************************ 00:07:29.671 START TEST thread_poller_perf 00:07:29.671 ************************************ 00:07:29.671 07:13:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.671 [2024-07-25 07:13:02.195948] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:07:29.671 [2024-07-25 07:13:02.196017] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361591 ] 00:07:29.931 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.931 [2024-07-25 07:13:02.257464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.931 [2024-07-25 07:13:02.368214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.931 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:31.301 ====================================== 00:07:31.301 busy:2702592745 (cyc) 00:07:31.301 total_run_count: 3862000 00:07:31.301 tsc_hz: 2700000000 (cyc) 00:07:31.301 ====================================== 00:07:31.301 poller_cost: 699 (cyc), 258 (nsec) 00:07:31.301 00:07:31.301 real 0m1.302s 00:07:31.301 user 0m1.213s 00:07:31.301 sys 0m0.084s 00:07:31.301 07:13:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.301 07:13:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.301 ************************************ 00:07:31.301 END TEST thread_poller_perf 00:07:31.301 ************************************ 00:07:31.302 07:13:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:31.302 00:07:31.302 real 0m2.744s 00:07:31.302 user 0m2.497s 00:07:31.302 sys 0m0.248s 00:07:31.302 07:13:03 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.302 07:13:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.302 ************************************ 00:07:31.302 END TEST thread 00:07:31.302 ************************************ 00:07:31.302 07:13:03 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:31.302 07:13:03 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:07:31.302 07:13:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.302 07:13:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.302 07:13:03 -- common/autotest_common.sh@10 -- # set +x 00:07:31.302 ************************************ 00:07:31.302 START TEST app_cmdline 00:07:31.302 ************************************ 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:31.302 * Looking for test storage... 00:07:31.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:31.302 07:13:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:31.302 07:13:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2361821 00:07:31.302 07:13:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:31.302 07:13:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2361821 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2361821 ']' 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.302 07:13:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.302 [2024-07-25 07:13:03.654545] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:07:31.302 [2024-07-25 07:13:03.654656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361821 ] 00:07:31.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.302 [2024-07-25 07:13:03.712713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.302 [2024-07-25 07:13:03.817679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.560 07:13:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.560 07:13:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:31.560 07:13:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:31.817 { 00:07:31.817 "version": "SPDK v24.09-pre git sha1 e5ef9abc9", 00:07:31.817 "fields": { 00:07:31.817 "major": 24, 00:07:31.817 "minor": 9, 00:07:31.817 "patch": 0, 00:07:31.817 "suffix": "-pre", 00:07:31.817 "commit": "e5ef9abc9" 00:07:31.817 } 00:07:31.817 } 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:31.817 07:13:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.817 07:13:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:31.817 07:13:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:31.817 07:13:04 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.074 07:13:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:32.074 07:13:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:32.074 07:13:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.074 07:13:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:32.074 07:13:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.074 07:13:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.074 07:13:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:32.075 07:13:04 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.332 request: 00:07:32.332 { 00:07:32.332 "method": "env_dpdk_get_mem_stats", 00:07:32.332 "req_id": 1 
00:07:32.332 } 00:07:32.332 Got JSON-RPC error response 00:07:32.332 response: 00:07:32.332 { 00:07:32.332 "code": -32601, 00:07:32.332 "message": "Method not found" 00:07:32.332 } 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.332 07:13:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2361821 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2361821 ']' 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2361821 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2361821 00:07:32.332 07:13:04 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.333 07:13:04 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.333 07:13:04 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2361821' 00:07:32.333 killing process with pid 2361821 00:07:32.333 07:13:04 app_cmdline -- common/autotest_common.sh@969 -- # kill 2361821 00:07:32.333 07:13:04 app_cmdline -- common/autotest_common.sh@974 -- # wait 2361821 00:07:32.627 00:07:32.627 real 0m1.580s 00:07:32.627 user 0m1.884s 00:07:32.627 sys 0m0.490s 00:07:32.627 07:13:05 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.627 07:13:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.627 ************************************ 00:07:32.627 END TEST app_cmdline 00:07:32.627 ************************************ 00:07:32.627 07:13:05 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:32.627 07:13:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.627 07:13:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.627 07:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.885 ************************************ 00:07:32.885 START TEST version 00:07:32.885 ************************************ 00:07:32.885 07:13:05 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:32.885 * Looking for test storage... 00:07:32.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.885 07:13:05 version -- app/version.sh@17 -- # get_header_version major 00:07:32.885 07:13:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # cut -f2 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.885 07:13:05 version -- app/version.sh@17 -- # major=24 00:07:32.885 07:13:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:32.885 07:13:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # cut -f2 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.885 07:13:05 version -- app/version.sh@18 -- # minor=9 00:07:32.885 07:13:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:32.885 07:13:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # cut -f2 00:07:32.885 07:13:05 
version -- app/version.sh@14 -- # tr -d '"' 00:07:32.885 07:13:05 version -- app/version.sh@19 -- # patch=0 00:07:32.885 07:13:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:32.885 07:13:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # cut -f2 00:07:32.885 07:13:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.885 07:13:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:32.885 07:13:05 version -- app/version.sh@22 -- # version=24.9 00:07:32.885 07:13:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:32.885 07:13:05 version -- app/version.sh@28 -- # version=24.9rc0 00:07:32.885 07:13:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.885 07:13:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:32.885 07:13:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:32.885 07:13:05 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:32.885 00:07:32.885 real 0m0.103s 00:07:32.885 user 0m0.055s 00:07:32.885 sys 0m0.070s 00:07:32.885 07:13:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.885 07:13:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:32.885 ************************************ 00:07:32.885 END TEST version 00:07:32.885 ************************************ 00:07:32.885 07:13:05 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@202 -- # uname -s 00:07:32.885 07:13:05 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:07:32.885 07:13:05 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:32.885 07:13:05 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:32.885 07:13:05 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:32.885 07:13:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.885 07:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.885 07:13:05 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:32.885 07:13:05 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:32.885 07:13:05 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:32.885 07:13:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:32.885 07:13:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.885 07:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:32.885 ************************************ 00:07:32.885 START TEST nvmf_tcp 00:07:32.885 ************************************ 00:07:32.885 07:13:05 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:32.885 * Looking for test storage... 00:07:32.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:32.885 07:13:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:32.885 07:13:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:32.885 07:13:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:32.885 07:13:05 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:32.885 07:13:05 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.885 07:13:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.143 ************************************ 00:07:33.143 START TEST nvmf_target_core 00:07:33.143 ************************************ 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:33.144 * Looking for test storage... 00:07:33.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.144 07:13:05 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.144 ************************************ 00:07:33.144 START TEST nvmf_abort 00:07:33.144 ************************************ 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:33.144 * Looking for test storage... 
00:07:33.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.144 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.145 07:13:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.145 07:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.045 07:13:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:35.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.045 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:35.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:35.304 07:13:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.304 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:35.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.305 07:13:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:35.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.305 07:13:07 
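The device-discovery loop traced above (nvmf/common.sh@340-401) matches each candidate PCI function against known NIC device IDs and then resolves it to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*`, which is how it arrives at `cvl_0_0` and `cvl_0_1`. A minimal Python sketch of that sysfs lookup; the `sysfs` parameter is an addition here (not in the script) so the function can be exercised against a fake tree:

```python
import glob
import os

def net_devs_for_pci(pci_addr, sysfs="/sys"):
    """Return the kernel net-device names exposed under one PCI function,
    mirroring the trace's pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    glob. Returns an empty list when no netdev is bound (e.g. the device
    is attached to a userspace driver instead of the kernel one)."""
    pattern = os.path.join(sysfs, "bus/pci/devices", pci_addr, "net", "*")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```

In the log both E810 ports (vendor 0x8086, device 0x159b, driver `ice`) resolve to exactly one netdev each, which is why the `(( 1 == 0 ))` emptiness checks pass.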
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.305 07:13:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:07:35.305 00:07:35.305 --- 10.0.0.2 ping statistics --- 00:07:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.305 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:35.305 00:07:35.305 --- 10.0.0.1 ping statistics --- 00:07:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.305 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:35.305 07:13:07 
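The `nvmf_tcp_init` trace above (nvmf/common.sh@229-268) builds a single-host test topology: one NIC port is moved into a network namespace to host the SPDK target at 10.0.0.2, the sibling port stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, and connectivity is verified with a ping in each direction. A sketch of that sequence with the names and addresses taken from the log; the `run` wrapper is an addition here that only echoes each command, since actually applying them requires root and the physical NICs:

```shell
#!/bin/sh
# Sketch of the netns-based NVMe/TCP test topology from the trace.
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, left in the root namespace
NS=cvl_0_0_ns_spdk    # namespace that will hold nvmf_tgt

run() { echo "+ $*"; }   # swap the echo for "$@" (as root) to really apply

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                         # isolate target port
run ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF" # target address
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                       # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator reachability
```

Running the target under `ip netns exec` is also why the trace later prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`: every target-side command has to execute inside the namespace.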
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2363757 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2363757 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2363757 ']' 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.305 07:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.305 [2024-07-25 07:13:07.788585] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:07:35.305 [2024-07-25 07:13:07.788700] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.564 [2024-07-25 07:13:07.859486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.564 [2024-07-25 07:13:07.980314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.564 [2024-07-25 07:13:07.980370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.564 [2024-07-25 07:13:07.980387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.564 [2024-07-25 07:13:07.980401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.564 [2024-07-25 07:13:07.980413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.564 [2024-07-25 07:13:07.980503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.564 [2024-07-25 07:13:07.980629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.564 [2024-07-25 07:13:07.980633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.497 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.497 [2024-07-25 07:13:08.756196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.498 Malloc0 00:07:36.498 07:13:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.498 Delay0 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.498 [2024-07-25 07:13:08.828649] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.498 07:13:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.498 [2024-07-25 07:13:08.975354] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:39.025 Initializing NVMe Controllers 00:07:39.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:39.025 controller IO queue size 128 less than required 00:07:39.025 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:39.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:39.025 Initialization complete. Launching workers. 
00:07:39.025 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33029 00:07:39.025 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33090, failed to submit 62 00:07:39.025 success 33033, unsuccess 57, failed 0 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.025 rmmod nvme_tcp 00:07:39.025 rmmod nvme_fabrics 00:07:39.025 rmmod nvme_keyring 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:39.025 07:13:11 
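The RPC calls traced in target/abort.sh@17-30 assemble the abort test end to end: create a TCP transport, stack a delay bdev on a malloc bdev so I/O stays in flight long enough to be aborted, export it through a subsystem listener on 10.0.0.2:4420, then drive it with the `abort` example at queue depth 128 against a 128-deep controller queue. A sketch of the same sequence; the `RPC` path assumes an SPDK checkout as the working directory, the latency values are copied from the log, and the commands are echoed rather than executed because they need a live `nvmf_tgt` on /var/tmp/spdk.sock:

```shell
#!/bin/sh
# Sketch of the abort-test RPC sequence shown in the trace (echoed only).
RPC="scripts/rpc.py"             # path relative to an SPDK tree (assumption)
NQN=nqn.2016-06.io.spdk:cnode0

run() { echo "+ $*"; }

run "$RPC" nvmf_create_transport -t tcp -o -u 8192 -a 256
run "$RPC" bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB, 4 KiB blocks
run "$RPC" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # large added latency (values from the log)
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK0     # -a: allow any host
run "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0          # export Delay0 as NSID 1
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
run "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The slow Delay0 namespace guarantees queued I/O for the abort example:
run build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```

The result lines above then make sense: almost all submitted I/Os fail (they were aborted), and nearly every submitted abort succeeds, which is exactly what the test is asserting.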
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2363757 ']' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2363757 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2363757 ']' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2363757 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2363757 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2363757' 00:07:39.025 killing process with pid 2363757 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2363757 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2363757 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.025 07:13:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.025 07:13:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.925 00:07:40.925 real 0m7.920s 00:07:40.925 user 0m12.573s 00:07:40.925 sys 0m2.555s 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.925 ************************************ 00:07:40.925 END TEST nvmf_abort 00:07:40.925 ************************************ 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.925 07:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.183 ************************************ 00:07:41.183 START TEST nvmf_ns_hotplug_stress 00:07:41.183 ************************************ 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:41.183 * Looking for test storage... 
00:07:41.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:41.183 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:41.184 07:13:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.184 07:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.084 07:13:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=()
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=()
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=()
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=()
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:43.084 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:43.084 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:43.084 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:43.084 Found net devices under 0000:0a:00.0: cvl_0_0
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:43.085 Found net devices under 0000:0a:00.1: cvl_0_1
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:43.085 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:43.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:43.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:07:43.343
00:07:43.343 --- 10.0.0.2 ping statistics ---
00:07:43.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:43.343 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:43.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:43.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:07:43.343
00:07:43.343 --- 10.0.0.1 ping statistics ---
00:07:43.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:43.343 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2366108
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2366108
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2366108 ']'
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:43.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:43.343 07:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:43.343 [2024-07-25 07:13:15.719134] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:07:43.343 [2024-07-25 07:13:15.719210] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:43.343 EAL: No free 2048 kB hugepages reported on node 1
00:07:43.343 [2024-07-25 07:13:15.790746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:43.600 [2024-07-25 07:13:15.911874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:43.600 [2024-07-25 07:13:15.911931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:43.600 [2024-07-25 07:13:15.911948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:43.600 [2024-07-25 07:13:15.911962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:43.600 [2024-07-25 07:13:15.911974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:43.600 [2024-07-25 07:13:15.912071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:07:43.600 [2024-07-25 07:13:15.914260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:07:43.600 [2024-07-25 07:13:15.914272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:43.600 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:43.857 [2024-07-25 07:13:16.268865] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:43.857 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:44.115 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:44.373 [2024-07-25 07:13:16.763817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:44.373 07:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:44.631 07:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:44.889 Malloc0
00:07:44.889 07:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:45.147 Delay0
00:07:45.147 07:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.405 07:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:45.663 NULL1
00:07:45.663 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:45.921 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2366522
00:07:45.921 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:45.921 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:45.921 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.921 EAL: No free 2048 kB hugepages reported on node 1
00:07:46.179 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.437 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:46.437 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:46.695 true
00:07:46.695 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:46.695 07:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.953 07:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.212 07:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:47.212 07:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:47.212 true
00:07:47.212 07:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:47.212 07:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.587 Read completed with error (sct=0, sc=11)
00:07:48.587 07:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.587 07:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:48.587 07:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:48.845 true
00:07:48.845 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:48.845 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.110 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:49.369 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:49.369 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:49.626 true
00:07:49.626 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:49.626 07:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.619 07:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:50.619 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:50.619 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:50.877 true
00:07:50.877 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:50.877 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.133 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.391 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:51.391 07:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:51.650 true
00:07:51.650 07:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:51.650 07:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.583 07:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.840 07:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:52.840 07:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:53.098 true
00:07:53.098 07:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:53.098 07:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.355 07:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.613 07:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:53.613 07:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:53.871 true
00:07:53.871 07:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:53.871 07:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.804 07:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:54.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:55.061 07:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:55.061 07:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:55.318 true
00:07:55.318 07:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:55.318 07:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.575 07:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.833 07:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:55.833 07:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:56.090 true
00:07:56.090 07:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:56.090 07:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.022 07:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.022 07:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:57.022 07:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:57.279 true
00:07:57.279 07:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:57.279 07:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.536 07:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.793 07:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:57.793 07:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:58.051 true
00:07:58.051 07:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:58.051 07:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.982 07:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.239 07:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:59.239 07:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:59.239 true
00:07:59.239 07:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:07:59.517 07:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.517 07:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.774 07:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:59.774 07:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:08:00.032 true
00:08:00.032 07:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:00.032 07:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.966 07:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:00.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:00.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:01.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:01.224 07:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:08:01.224 07:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:08:01.481 true
00:08:01.481 07:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:01.481 07:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.739 07:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.996 07:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:08:01.996 07:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:08:02.254 true
00:08:02.254 07:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:02.254 07:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.187 07:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:03.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:03.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:03.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:03.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:03.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:03.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:03.445 07:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:08:03.445 07:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:08:03.703 true
00:08:03.703 07:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:03.703 07:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.636 07:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:04.636 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:08:04.636 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:08:04.894 true
00:08:04.894 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:04.894 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.152 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:05.410 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:05.410 07:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:05.667 true
00:08:05.667 07:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:05.667 07:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:06.600 07:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:06.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:06.858 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:06.858 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:07.116 true
00:08:07.116 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:07.116 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.373 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:07.373 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:07.373 07:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:07.631 true
00:08:07.631 07:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522
00:08:07.631 07:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.564 07:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.822 07:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:08.822 07:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:09.080 true 00:08:09.080 07:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:09.080 07:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.338 07:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.596 07:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:09.596 07:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:09.854 true 00:08:09.854 07:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:09.854 07:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:10.789 07:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.047 07:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:11.047 07:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:11.304 true 00:08:11.304 07:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:11.304 07:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.561 07:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.819 07:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:11.819 07:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:11.819 true 00:08:11.819 07:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 
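The timestamped fragments above all trace the same few lines of SPDK's target/ns_hotplug_stress.sh (sh@44-50): check that the traced process is still alive with `kill -0`, detach and re-attach namespace 1, then grow the null bdev by one block per pass. A minimal, self-contained sketch of that loop, with rpc.py replaced by an echo stub and a bounded iteration count (both are assumptions; a real run drives spdk/scripts/rpc.py against a live target and loops until the I/O process exits), looks like:

```shell
# Sketch of the ns_hotplug_stress.sh@44-50 loop seen in the log above.
# RPC is a deliberately word-split echo stub; a real run would use
# RPC=/path/to/spdk/scripts/rpc.py instead.
RPC="${RPC:-echo rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"
TARGET_PID=$$   # placeholder for the perf process the script watches

null_size=1014
while kill -0 "$TARGET_PID" 2>/dev/null && [ "$null_size" -lt 1020 ]; do
    # Hot-unplug and re-plug namespace 1 while I/O is in flight
    $RPC nvmf_subsystem_remove_ns "$NQN" 1 >/dev/null
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0 >/dev/null
    # Grow the null bdev by one block each iteration
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size" >/dev/null
done
echo "final null_size=$null_size"
```

In the real script the loop runs until the perf process it is watching exits; the size bound here only keeps the sketch finite.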
00:08:11.819 07:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.192 07:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.192 07:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:13.192 07:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:13.449 true 00:08:13.449 07:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:13.449 07:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.381 07:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.381 
07:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:14.381 07:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:14.639 true 00:08:14.639 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:14.639 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.896 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.154 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:15.154 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:15.411 true 00:08:15.411 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:15.411 07:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.344 Initializing NVMe Controllers
00:08:16.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:16.344 Controller IO queue size 128, less than required.
00:08:16.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:16.344 Controller IO queue size 128, less than required.
00:08:16.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:16.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:16.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:16.344 Initialization complete. Launching workers.
00:08:16.344 ========================================================
00:08:16.344 Latency(us)
00:08:16.344 Device Information : IOPS MiB/s Average min max
00:08:16.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 990.00 0.48 72748.02 2967.89 1011967.02
00:08:16.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12058.17 5.89 10583.41 3266.91 452258.74
00:08:16.344 ========================================================
00:08:16.344 Total : 13048.17 6.37 15300.01 2967.89 1011967.02
00:08:16.344
00:08:16.344 07:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.602 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:16.602 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:16.860 true 00:08:16.860 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2366522 00:08:16.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2366522) - No such process 00:08:16.860 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2366522 00:08:16.860 07:13:49
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.118 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.376 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:17.376 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:17.376 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:17.376 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.376 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:17.634 null0 00:08:17.634 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.634 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.634 07:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:17.891 null1 00:08:17.891 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.891 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.891 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:18.148 null2 00:08:18.148 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.148 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.148 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:18.406 null3 00:08:18.406 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.406 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.406 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:18.406 null4 00:08:18.406 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.406 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.406 07:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:18.689 null5 00:08:18.689 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.689 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.689 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 
00:08:18.971 null6 00:08:18.971 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.971 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.971 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:19.229 null7 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.229 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
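The sh@58-66 fragments above create eight null bdevs (null0 through null7) and then launch one backgrounded add_remove worker per namespace/bdev pair, recording each PID into `pids` and waiting on them all. A rough reconstruction of that phase follows; the function shape mirrors the fragments, the rpc.py echo stub is an assumption, and the inner loop count mirrors the `(( i < 10 ))` sh@16 fragments:

```shell
# Sketch of the parallel add_remove phase traced by sh@58-66 above.
# RPC is a deliberately word-split echo stub standing in for
# spdk/scripts/rpc.py against a live target.
RPC="${RPC:-echo rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"
nthreads=8

add_remove() {
    # Repeatedly attach and detach one namespace (sh@14-19 in the log)
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev" >/dev/null
        $RPC nvmf_subsystem_remove_ns "$NQN" "$nsid" >/dev/null
    done
}

pids=()
for ((t = 0; t < nthreads; t++)); do
    add_remove "$((t + 1))" "null$t" &   # one worker per namespace/bdev pair
    pids+=($!)
done
wait "${pids[@]}"
echo "workers=${#pids[@]}"
```

Running the eight workers concurrently is what exercises the subsystem's namespace locking: every worker hammers the same subsystem NQN with interleaved add/remove RPCs.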
00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2370596 2370597 2370599 2370601 2370603 2370605 2370607 2370609 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.230 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.488 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.488 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.488 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.488 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.488 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.488 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.489 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.489 07:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.747 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:20.006 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.265 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:20.524 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.524 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.524 07:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:20.524 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:20.524 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:20.524 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:20.524 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:20.783 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.783 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:20.783 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:20.783 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:21.040 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.040 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.040 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.041 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:21.299 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.558 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:21.559 07:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:21.817 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:21.818 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.076 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:22.335 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:22.592 07:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:22.849 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.107 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:23.365 07:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.623 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.624 07:13:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.624 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.624 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.624 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.624 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.881 07:13:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.881 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.140 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.399 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.399 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.399 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.399 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.657 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.657 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.657 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.657 07:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.915 rmmod nvme_tcp 00:08:24.915 rmmod nvme_fabrics 00:08:24.915 rmmod nvme_keyring 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2366108 ']' 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2366108 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # 
'[' -z 2366108 ']' 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2366108 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2366108 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2366108' 00:08:24.915 killing process with pid 2366108 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2366108 00:08:24.915 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2366108 00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.173 
07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:25.173 07:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:27.699
00:08:27.699 real 0m46.160s
00:08:27.699 user 3m29.995s
00:08:27.699 sys 0m16.477s
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:27.699 ************************************
00:08:27.699 END TEST nvmf_ns_hotplug_stress
00:08:27.699 ************************************
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:27.699 ************************************
00:08:27.699 START TEST nvmf_delete_subsystem
00:08:27.699 ************************************
00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:27.699 * Looking for test storage...
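The hotplug-stress entries above come from test/nvmf/target/ns_hotplug_stress.sh (lines @16-@18 in the trace). The sketch below is reconstructed from the log only, not from the script itself: `rpc` is a stub standing in for scripts/rpc.py so it runs without an SPDK target, and the `&`/`wait` fan-out is an assumption inferred from the out-of-order completions in the log. The loop bound (10 cycles) and namespace count (null0..null7 mapped to NSIDs 1..8) match the trace.

```shell
#!/usr/bin/env bash
# Sketch of the add/remove cycle recorded above. Assumptions: `rpc` is a
# placeholder for /path/to/spdk/scripts/rpc.py; parallel dispatch is inferred
# from the log's out-of-order namespace numbers.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1

rpc() {  # stand-in for: scripts/rpc.py "$@"
    echo "rpc $*"
}

i=0
while (( i < 10 )); do            # ns_hotplug_stress.sh@16 in the trace
    for n in {1..8}; do           # attach null0..null7 as NSIDs 1..8
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    for n in {1..8}; do           # then hot-remove each namespace again
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
    (( ++i ))
done
echo "completed $i add/remove cycles"
```

Backgrounding each rpc.py call and reaping them with `wait` is what makes the NSIDs appear shuffled in the log: each cycle issues all eight adds (or removes) concurrently and only then advances the counter.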
00:08:27.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.699 07:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.601 07:14:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.601 07:14:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.601 07:14:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.601 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:29.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.602 
07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:08:29.602 00:08:29.602 --- 10.0.0.2 ping statistics --- 00:08:29.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.602 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:29.602 00:08:29.602 --- 10.0.0.1 ping statistics --- 00:08:29.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.602 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2373351 00:08:29.602 07:14:01 
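The `nvmf_tcp_init` phase traced above moves one E810 port into a private network namespace so that target and initiator can exchange real TCP traffic on a single host. A minimal sketch of the same topology, distilled from the commands in the trace (interface names `cvl_0_0`/`cvl_0_1` as in this run; requires root):

```shell
# Namespace topology from the trace above: target port lives in its own
# netns, initiator port stays in the host namespace. Must run as root.
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # verify host -> namespace path
```

The bidirectional pings in the log (host to 10.0.0.2, namespace to 10.0.0.1) are the sanity check that this topology is up before the target is started inside the namespace with `ip netns exec`.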
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2373351 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2373351 ']' 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.602 07:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.602 [2024-07-25 07:14:01.936182] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:29.602 [2024-07-25 07:14:01.936302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.602 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.602 [2024-07-25 07:14:02.000094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:29.602 [2024-07-25 07:14:02.112973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:29.602 [2024-07-25 07:14:02.113025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.602 [2024-07-25 07:14:02.113054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.602 [2024-07-25 07:14:02.113065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.602 [2024-07-25 07:14:02.113075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.602 [2024-07-25 07:14:02.113165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.602 [2024-07-25 07:14:02.113171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 [2024-07-25 07:14:02.262459] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 [2024-07-25 07:14:02.278807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 NULL1 00:08:29.861 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.861 07:14:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.862 Delay0 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2373376 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:29.862 07:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:29.862 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.862 [2024-07-25 07:14:02.353417] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
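In SPDK's test harness, `rpc_cmd` is a wrapper around `scripts/rpc.py`, so the target construction traced in `delete_subsystem.sh` above can be sketched as the equivalent direct JSON-RPC invocations (values copied from the trace; assumes a running `nvmf_tgt` listening on the default `/var/tmp/spdk.sock`):

```shell
# Equivalent scripts/rpc.py calls for the rpc_cmd sequence in the trace above.
# Assumes nvmf_tgt is already running with the default RPC socket.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10            # allow any host, 10 namespaces max
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000    # 1 s avg/p99 latencies, in us
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with one-second injected latencies, `spdk_nvme_perf` is guaranteed to have I/O in flight when `nvmf_delete_subsystem` fires two seconds later, which is what produces the abort storm that follows.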
00:08:32.390 07:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.390 07:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.390 07:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error 
(sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 Write completed with error (sct=0, sc=8) 00:08:32.390 starting I/O failed: -6 00:08:32.390 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 [2024-07-25 07:14:04.585073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58ec00d660 is same with the state(5) to be set 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, 
sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with 
error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 
00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read 
completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 Read completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 Write completed with error (sct=0, sc=8) 00:08:32.391 starting I/O failed: -6 00:08:32.391 starting I/O failed: -6 00:08:32.391 starting I/O failed: -6 00:08:32.391 starting I/O failed: -6 00:08:32.391 starting I/O failed: -6 00:08:33.325 [2024-07-25 07:14:05.533312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16adac0 is same with the state(5) to be set 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error (sct=0, sc=8) 00:08:33.325 Write completed with error (sct=0, sc=8) 00:08:33.325 Read completed with error 
(sct=0, sc=8)
00:08:33.325 Read completed with error (sct=0, sc=8)
00:08:33.325 Write completed with error (sct=0, sc=8)
00:08:33.325 [2024-07-25 07:14:05.583399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58ec00d330 is same with the state(5) to be set
00:08:33.326 [2024-07-25 07:14:05.586750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac5c0 is same with the state(5) to be set
00:08:33.326 [2024-07-25 07:14:05.587000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16acc20 is same with the state(5) to be set
00:08:33.326 [2024-07-25 07:14:05.587999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac3e0 is same with the state(5) to be set
00:08:33.326 Initializing NVMe Controllers
00:08:33.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:33.326 Controller IO queue size 128, less than required.
00:08:33.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:33.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:33.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:33.326 Initialization complete. Launching workers.
00:08:33.326 ========================================================
00:08:33.326 Latency(us)
00:08:33.326 Device Information : IOPS MiB/s Average min max
00:08:33.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.52 0.09 991619.78 710.71 2002318.37
00:08:33.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.33 0.07 979037.13 417.31 2001062.40
00:08:33.326 ========================================================
00:08:33.326 Total : 330.85 0.16 986054.74 417.31 2002318.37
00:08:33.326
00:08:33.326 [2024-07-25 07:14:05.588876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16adac0 (9): Bad file descriptor
00:08:33.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:33.326 07:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.326 07:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:33.326 07:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2373376 00:08:33.326 07:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2373376 00:08:33.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2373376) - No such process 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2373376 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:33.585 07:14:06
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2373376 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2373376 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.585 
07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.585 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.585 [2024-07-25 07:14:06.113835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2373899 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.844 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:33.844 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.844 [2024-07-25 07:14:06.176184] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:34.102 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.102 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:34.102 07:14:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.668 07:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.668 07:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:34.668 07:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.235 07:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.235 07:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:35.235 07:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.802 07:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.802 07:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:35.802 07:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:36.367 07:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.367 07:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:36.367 07:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 
0.5 00:08:36.625 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.625 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:36.625 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:36.883 Initializing NVMe Controllers
00:08:36.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:36.883 Controller IO queue size 128, less than required.
00:08:36.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:36.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:36.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:36.883 Initialization complete. Launching workers.
00:08:36.883 ========================================================
00:08:36.883 Latency(us)
00:08:36.883 Device Information : IOPS MiB/s Average min max
00:08:36.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004359.54 1000234.58 1043385.28
00:08:36.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006603.47 1000294.35 1043914.63
00:08:36.883 ========================================================
00:08:36.883 Total : 256.00 0.12 1005481.51 1000234.58 1043914.63
00:08:36.883
00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2373899 00:08:37.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2373899) - No such process 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@67 -- # wait 2373899 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.141 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.141 rmmod nvme_tcp 00:08:37.399 rmmod nvme_fabrics 00:08:37.399 rmmod nvme_keyring 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2373351 ']' 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2373351 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2373351 ']' 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2373351 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 
00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2373351 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2373351' 00:08:37.399 killing process with pid 2373351 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2373351 00:08:37.399 07:14:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2373351 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.657 07:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.554 07:14:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:39.554
00:08:39.554 real 0m12.375s
00:08:39.554 user 0m28.005s
00:08:39.554 sys 0m2.927s
00:08:39.554 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:39.554 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:39.554 ************************************
00:08:39.554 END TEST nvmf_delete_subsystem
00:08:39.554 ************************************
00:08:39.555 07:14:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:39.555 07:14:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:39.555 07:14:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:39.555 07:14:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:39.813 ************************************
00:08:39.813 START TEST nvmf_host_management
00:08:39.813 ************************************
00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:39.813 * Looking for test storage...
00:08:39.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # 
nvmftestinit 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.813 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.814 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.814 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.814 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.814 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.814 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.814 07:14:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.713 07:14:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.713 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.714 
07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.714 
07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.714 
07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.714 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:08:41.973 00:08:41.973 --- 10.0.0.2 ping statistics --- 00:08:41.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.973 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:41.973 00:08:41.973 --- 10.0.0.1 ping statistics --- 00:08:41.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.973 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.973 07:14:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2376248 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2376248 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2376248 ']' 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.973 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.973 [2024-07-25 07:14:14.408123] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:08:41.973 [2024-07-25 07:14:14.408206] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.973 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.973 [2024-07-25 07:14:14.479228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.231 [2024-07-25 07:14:14.598169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.231 [2024-07-25 07:14:14.598236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.231 [2024-07-25 07:14:14.598262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.231 [2024-07-25 07:14:14.598277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.231 [2024-07-25 07:14:14.598289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:42.231 [2024-07-25 07:14:14.598378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.231 [2024-07-25 07:14:14.598495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.231 [2024-07-25 07:14:14.598536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:42.231 [2024-07-25 07:14:14.598539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.164 [2024-07-25 07:14:15.350729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:43.164 07:14:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.164 Malloc0 00:08:43.164 [2024-07-25 07:14:15.411716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2376424 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2376424 /var/tmp/bdevperf.sock 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2376424 ']' 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.164 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:43.165 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.165 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:43.165 { 00:08:43.165 "params": { 00:08:43.165 "name": "Nvme$subsystem", 00:08:43.165 "trtype": "$TEST_TRANSPORT", 00:08:43.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.165 "adrfam": "ipv4", 00:08:43.165 "trsvcid": "$NVMF_PORT", 00:08:43.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.165 "hdgst": ${hdgst:-false}, 
00:08:43.165 "ddgst": ${ddgst:-false} 00:08:43.165 }, 00:08:43.165 "method": "bdev_nvme_attach_controller" 00:08:43.165 } 00:08:43.165 EOF 00:08:43.165 )") 00:08:43.165 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:43.165 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:43.165 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:43.165 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:43.165 "params": { 00:08:43.165 "name": "Nvme0", 00:08:43.165 "trtype": "tcp", 00:08:43.165 "traddr": "10.0.0.2", 00:08:43.165 "adrfam": "ipv4", 00:08:43.165 "trsvcid": "4420", 00:08:43.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:43.165 "hdgst": false, 00:08:43.165 "ddgst": false 00:08:43.165 }, 00:08:43.165 "method": "bdev_nvme_attach_controller" 00:08:43.165 }' 00:08:43.165 [2024-07-25 07:14:15.486716] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:43.165 [2024-07-25 07:14:15.486789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376424 ] 00:08:43.165 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.165 [2024-07-25 07:14:15.546749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.165 [2024-07-25 07:14:15.657007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.423 Running I/O for 10 seconds... 
00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.423 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.682 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.682 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:43.682 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:43.682 07:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.941 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 [2024-07-25 07:14:16.266584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:43.941 [2024-07-25 07:14:16.266642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:43.941 [2024-07-25 07:14:16.266683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:43.941 [2024-07-25 07:14:16.266710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:43.941 [2024-07-25 07:14:16.266737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1840 is same with the state(5) to be set 00:08:43.941 [2024-07-25 07:14:16.266846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.266866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.266907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.266938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.266966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.266981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 
[2024-07-25 07:14:16.266995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.941 [2024-07-25 07:14:16.267305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.941 [2024-07-25 07:14:16.267319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.267986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.267999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268155] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268341] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.942 [2024-07-25 07:14:16.268440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.942 [2024-07-25 07:14:16.268453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 
07:14:16.268681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.943 [2024-07-25 07:14:16.268752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.268834] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d2da0 was disconnected and freed. reset controller. 
00:08:43.943 [2024-07-25 07:14:16.269990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:43.943 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.943 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:43.943 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.943 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.943 task offset: 71168 on job bdev=Nvme0n1 fails 00:08:43.943 00:08:43.943 Latency(us) 00:08:43.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.943 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:43.943 Job: Nvme0n1 ended in about 0.38 seconds with error 00:08:43.943 Verification LBA range: start 0x0 length 0x400 00:08:43.943 Nvme0n1 : 0.38 1340.23 83.76 167.53 0.00 41218.12 2706.39 37282.70 00:08:43.943 =================================================================================================================== 00:08:43.943 Total : 1340.23 83.76 167.53 0.00 41218.12 2706.39 37282.70 00:08:43.943 [2024-07-25 07:14:16.271862] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.943 [2024-07-25 07:14:16.271889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1840 (9): Bad file descriptor 00:08:43.943 [2024-07-25 07:14:16.274211] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:43.943 [2024-07-25 07:14:16.274460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:43.943 [2024-07-25 
07:14:16.274489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.943 [2024-07-25 07:14:16.274516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:43.943 [2024-07-25 07:14:16.274547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:43.943 [2024-07-25 07:14:16.274561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:43.943 [2024-07-25 07:14:16.274573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfa1840 00:08:43.943 [2024-07-25 07:14:16.274606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1840 (9): Bad file descriptor 00:08:43.943 [2024-07-25 07:14:16.274631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:43.943 [2024-07-25 07:14:16.274645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:43.943 [2024-07-25 07:14:16.274661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:43.943 [2024-07-25 07:14:16.274681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:08:43.943 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.943 07:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2376424 00:08:44.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2376424) - No such process 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:44.877 { 00:08:44.877 "params": { 00:08:44.877 "name": "Nvme$subsystem", 00:08:44.877 "trtype": "$TEST_TRANSPORT", 00:08:44.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:44.877 "adrfam": "ipv4", 00:08:44.877 "trsvcid": "$NVMF_PORT", 00:08:44.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:44.877 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:44.877 "hdgst": ${hdgst:-false}, 00:08:44.877 "ddgst": ${ddgst:-false} 00:08:44.877 }, 00:08:44.877 "method": "bdev_nvme_attach_controller" 00:08:44.877 } 00:08:44.877 EOF 00:08:44.877 )") 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:44.877 07:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:44.877 "params": { 00:08:44.877 "name": "Nvme0", 00:08:44.877 "trtype": "tcp", 00:08:44.877 "traddr": "10.0.0.2", 00:08:44.877 "adrfam": "ipv4", 00:08:44.877 "trsvcid": "4420", 00:08:44.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:44.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:44.877 "hdgst": false, 00:08:44.877 "ddgst": false 00:08:44.877 }, 00:08:44.877 "method": "bdev_nvme_attach_controller" 00:08:44.877 }' 00:08:44.877 [2024-07-25 07:14:17.328051] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:44.877 [2024-07-25 07:14:17.328125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376695 ] 00:08:44.877 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.877 [2024-07-25 07:14:17.388575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.135 [2024-07-25 07:14:17.498488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.392 Running I/O for 1 seconds... 
00:08:46.326 00:08:46.326 Latency(us) 00:08:46.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.326 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:46.326 Verification LBA range: start 0x0 length 0x400 00:08:46.326 Nvme0n1 : 1.01 1521.25 95.08 0.00 0.00 41411.48 7670.14 36117.62 00:08:46.326 =================================================================================================================== 00:08:46.326 Total : 1521.25 95.08 0.00 0.00 41411.48 7670.14 36117.62 00:08:46.584 07:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:46.584 07:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:46.584 07:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.584 rmmod nvme_tcp 
00:08:46.584 rmmod nvme_fabrics 00:08:46.584 rmmod nvme_keyring 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2376248 ']' 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2376248 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2376248 ']' 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2376248 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2376248 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2376248' 00:08:46.584 killing process with pid 2376248 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2376248 00:08:46.584 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2376248 00:08:46.842 [2024-07-25 07:14:19.356416] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.101 07:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:49.010 00:08:49.010 real 0m9.332s 00:08:49.010 user 0m22.076s 00:08:49.010 sys 0m2.758s 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.010 ************************************ 00:08:49.010 END TEST nvmf_host_management 00:08:49.010 ************************************ 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.010 ************************************ 00:08:49.010 START TEST nvmf_lvol 00:08:49.010 ************************************ 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:49.010 * Looking for test storage... 00:08:49.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.010 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.269 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:51.169 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:51.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.169 07:14:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:51.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:51.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:08:51.169 00:08:51.169 --- 10.0.0.2 ping statistics --- 00:08:51.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.169 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:51.169 00:08:51.169 --- 10.0.0.1 ping statistics --- 00:08:51.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.169 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2378782 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:51.169 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2378782 00:08:51.170 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2378782 ']' 00:08:51.170 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.170 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.170 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.170 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.170 07:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.427 [2024-07-25 07:14:23.711899] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:08:51.427 [2024-07-25 07:14:23.711979] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.427 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.427 [2024-07-25 07:14:23.779512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.427 [2024-07-25 07:14:23.895201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.427 [2024-07-25 07:14:23.895260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.427 [2024-07-25 07:14:23.895287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.427 [2024-07-25 07:14:23.895301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.427 [2024-07-25 07:14:23.895314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.427 [2024-07-25 07:14:23.895376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.427 [2024-07-25 07:14:23.895448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.427 [2024-07-25 07:14:23.895451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.389 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:52.389 [2024-07-25 07:14:24.913920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.648 07:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.906 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:52.906 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.164 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:53.164 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:53.421 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:53.678 07:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=de767c51-af9e-497e-9386-fc190310c38f 00:08:53.678 07:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u de767c51-af9e-497e-9386-fc190310c38f lvol 20 00:08:53.935 07:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7c380dd5-c941-40bd-b2f7-f4bb0dd94888 00:08:53.935 07:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.193 07:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c380dd5-c941-40bd-b2f7-f4bb0dd94888 00:08:54.450 07:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.707 [2024-07-25 07:14:27.118998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.707 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.964 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2379340 00:08:54.964 07:14:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:54.964 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:54.964 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.898 07:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7c380dd5-c941-40bd-b2f7-f4bb0dd94888 MY_SNAPSHOT 00:08:56.463 07:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=132e137a-4f72-4c53-9f9f-fc198275c01f 00:08:56.463 07:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7c380dd5-c941-40bd-b2f7-f4bb0dd94888 30 00:08:56.721 07:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 132e137a-4f72-4c53-9f9f-fc198275c01f MY_CLONE 00:08:56.978 07:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=62ca3dad-9656-4f8b-b21d-171a5bdf41d4 00:08:56.978 07:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 62ca3dad-9656-4f8b-b21d-171a5bdf41d4 00:08:57.912 07:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2379340 00:09:06.021 Initializing NVMe Controllers 00:09:06.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:06.021 Controller IO queue size 128, less than required. 00:09:06.021 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:06.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:06.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:06.021 Initialization complete. Launching workers. 00:09:06.021 ======================================================== 00:09:06.021 Latency(us) 00:09:06.021 Device Information : IOPS MiB/s Average min max 00:09:06.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10344.10 40.41 12377.28 2086.57 137911.89 00:09:06.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10736.50 41.94 11921.89 2135.74 60503.78 00:09:06.021 ======================================================== 00:09:06.021 Total : 21080.60 82.35 12145.35 2086.57 137911.89 00:09:06.021 00:09:06.021 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.021 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c380dd5-c941-40bd-b2f7-f4bb0dd94888 00:09:06.021 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u de767c51-af9e-497e-9386-fc190310c38f 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:06.279 07:14:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.279 rmmod nvme_tcp 00:09:06.279 rmmod nvme_fabrics 00:09:06.279 rmmod nvme_keyring 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2378782 ']' 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2378782 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2378782 ']' 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2378782 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2378782 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2378782' 00:09:06.279 killing process with pid 2378782 00:09:06.279 
07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2378782 00:09:06.279 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2378782 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.538 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.069 00:09:09.069 real 0m19.538s 00:09:09.069 user 1m6.652s 00:09:09.069 sys 0m5.677s 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.069 ************************************ 00:09:09.069 END TEST nvmf_lvol 00:09:09.069 ************************************ 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.069 07:14:41 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.069 ************************************ 00:09:09.069 START TEST nvmf_lvs_grow 00:09:09.069 ************************************ 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.069 * Looking for test storage... 00:09:09.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.069 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.070 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.974 07:14:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.974 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:10.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.975 
07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:10.975 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.975 07:14:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:10.975 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:10.975 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.975 07:14:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:09:10.975 00:09:10.975 --- 10.0.0.2 ping statistics --- 00:09:10.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.975 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:09:10.975 00:09:10.975 --- 10.0.0.1 ping statistics --- 00:09:10.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.975 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2382608 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:10.975 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2382608 00:09:10.976 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2382608 ']' 00:09:10.976 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.976 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.976 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.976 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.976 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.976 [2024-07-25 07:14:43.442806] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:10.976 [2024-07-25 07:14:43.442892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.976 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.237 [2024-07-25 07:14:43.508175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.237 [2024-07-25 07:14:43.616695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.237 [2024-07-25 07:14:43.616754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:11.237 [2024-07-25 07:14:43.616767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.237 [2024-07-25 07:14:43.616779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.237 [2024-07-25 07:14:43.616788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.237 [2024-07-25 07:14:43.616813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.237 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.805 [2024-07-25 07:14:44.033260] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.805 ************************************ 00:09:11.805 START TEST lvs_grow_clean 00:09:11.805 ************************************ 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:11.805 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.063 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:12.063 07:14:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:12.321 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:12.321 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:12.321 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:12.579 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:12.579 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:12.579 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 lvol 150 00:09:12.838 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2d0067c2-9c5c-46de-82e2-ff6bd18ebe91 00:09:12.838 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.838 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:13.134 [2024-07-25 07:14:45.431703] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:13.134 [2024-07-25 07:14:45.431777] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:13.134 true 00:09:13.134 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:13.134 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:13.392 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:13.392 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.650 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d0067c2-9c5c-46de-82e2-ff6bd18ebe91 00:09:13.908 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:13.908 [2024-07-25 07:14:46.434870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.166 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.425 07:14:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2383049 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2383049 /var/tmp/bdevperf.sock 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2383049 ']' 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.425 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 [2024-07-25 07:14:46.739901] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:09:14.425 [2024-07-25 07:14:46.739975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383049 ] 00:09:14.425 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.425 [2024-07-25 07:14:46.802192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.425 [2024-07-25 07:14:46.920729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.683 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.683 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:14.683 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.249 Nvme0n1 00:09:15.249 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.249 [ 00:09:15.249 { 00:09:15.249 "name": "Nvme0n1", 00:09:15.249 "aliases": [ 00:09:15.249 "2d0067c2-9c5c-46de-82e2-ff6bd18ebe91" 00:09:15.249 ], 00:09:15.249 "product_name": "NVMe disk", 00:09:15.249 "block_size": 4096, 00:09:15.249 "num_blocks": 38912, 00:09:15.249 "uuid": "2d0067c2-9c5c-46de-82e2-ff6bd18ebe91", 00:09:15.249 "assigned_rate_limits": { 00:09:15.249 "rw_ios_per_sec": 0, 00:09:15.249 "rw_mbytes_per_sec": 0, 00:09:15.249 "r_mbytes_per_sec": 0, 00:09:15.249 "w_mbytes_per_sec": 0 00:09:15.249 }, 00:09:15.249 "claimed": false, 00:09:15.249 "zoned": false, 00:09:15.249 
"supported_io_types": { 00:09:15.249 "read": true, 00:09:15.249 "write": true, 00:09:15.249 "unmap": true, 00:09:15.249 "flush": true, 00:09:15.249 "reset": true, 00:09:15.249 "nvme_admin": true, 00:09:15.249 "nvme_io": true, 00:09:15.249 "nvme_io_md": false, 00:09:15.249 "write_zeroes": true, 00:09:15.249 "zcopy": false, 00:09:15.249 "get_zone_info": false, 00:09:15.249 "zone_management": false, 00:09:15.249 "zone_append": false, 00:09:15.249 "compare": true, 00:09:15.249 "compare_and_write": true, 00:09:15.249 "abort": true, 00:09:15.249 "seek_hole": false, 00:09:15.249 "seek_data": false, 00:09:15.249 "copy": true, 00:09:15.249 "nvme_iov_md": false 00:09:15.249 }, 00:09:15.249 "memory_domains": [ 00:09:15.249 { 00:09:15.249 "dma_device_id": "system", 00:09:15.249 "dma_device_type": 1 00:09:15.249 } 00:09:15.249 ], 00:09:15.249 "driver_specific": { 00:09:15.249 "nvme": [ 00:09:15.249 { 00:09:15.249 "trid": { 00:09:15.249 "trtype": "TCP", 00:09:15.249 "adrfam": "IPv4", 00:09:15.249 "traddr": "10.0.0.2", 00:09:15.249 "trsvcid": "4420", 00:09:15.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.249 }, 00:09:15.249 "ctrlr_data": { 00:09:15.249 "cntlid": 1, 00:09:15.249 "vendor_id": "0x8086", 00:09:15.249 "model_number": "SPDK bdev Controller", 00:09:15.249 "serial_number": "SPDK0", 00:09:15.249 "firmware_revision": "24.09", 00:09:15.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.249 "oacs": { 00:09:15.249 "security": 0, 00:09:15.249 "format": 0, 00:09:15.249 "firmware": 0, 00:09:15.249 "ns_manage": 0 00:09:15.250 }, 00:09:15.250 "multi_ctrlr": true, 00:09:15.250 "ana_reporting": false 00:09:15.250 }, 00:09:15.250 "vs": { 00:09:15.250 "nvme_version": "1.3" 00:09:15.250 }, 00:09:15.250 "ns_data": { 00:09:15.250 "id": 1, 00:09:15.250 "can_share": true 00:09:15.250 } 00:09:15.250 } 00:09:15.250 ], 00:09:15.250 "mp_policy": "active_passive" 00:09:15.250 } 00:09:15.250 } 00:09:15.250 ] 00:09:15.508 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2383185 00:09:15.508 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.508 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.508 Running I/O for 10 seconds... 00:09:16.441 Latency(us) 00:09:16.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.441 Nvme0n1 : 1.00 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:09:16.441 =================================================================================================================== 00:09:16.441 Total : 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:09:16.441 00:09:17.375 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:17.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.633 Nvme0n1 : 2.00 14427.00 56.36 0.00 0.00 0.00 0.00 0.00 00:09:17.633 =================================================================================================================== 00:09:17.633 Total : 14427.00 56.36 0.00 0.00 0.00 0.00 0.00 00:09:17.633 00:09:17.633 true 00:09:17.633 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:17.633 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:17.891 07:14:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:17.891 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:17.891 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2383185 00:09:18.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.456 Nvme0n1 : 3.00 14470.67 56.53 0.00 0.00 0.00 0.00 0.00 00:09:18.456 =================================================================================================================== 00:09:18.456 Total : 14470.67 56.53 0.00 0.00 0.00 0.00 0.00 00:09:18.456 00:09:19.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.389 Nvme0n1 : 4.00 14524.25 56.74 0.00 0.00 0.00 0.00 0.00 00:09:19.389 =================================================================================================================== 00:09:19.389 Total : 14524.25 56.74 0.00 0.00 0.00 0.00 0.00 00:09:19.389 00:09:20.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.762 Nvme0n1 : 5.00 14555.20 56.86 0.00 0.00 0.00 0.00 0.00 00:09:20.762 =================================================================================================================== 00:09:20.762 Total : 14555.20 56.86 0.00 0.00 0.00 0.00 0.00 00:09:20.762 00:09:21.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.703 Nvme0n1 : 6.00 14597.67 57.02 0.00 0.00 0.00 0.00 0.00 00:09:21.703 =================================================================================================================== 00:09:21.703 Total : 14597.67 57.02 0.00 0.00 0.00 0.00 0.00 00:09:21.703 00:09:22.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.635 Nvme0n1 : 7.00 14654.43 57.24 0.00 0.00 0.00 0.00 0.00 00:09:22.635 
=================================================================================================================== 00:09:22.635 Total : 14654.43 57.24 0.00 0.00 0.00 0.00 0.00 00:09:22.635 00:09:23.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.567 Nvme0n1 : 8.00 14704.75 57.44 0.00 0.00 0.00 0.00 0.00 00:09:23.567 =================================================================================================================== 00:09:23.567 Total : 14704.75 57.44 0.00 0.00 0.00 0.00 0.00 00:09:23.567 00:09:24.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.499 Nvme0n1 : 9.00 14730.11 57.54 0.00 0.00 0.00 0.00 0.00 00:09:24.499 =================================================================================================================== 00:09:24.499 Total : 14730.11 57.54 0.00 0.00 0.00 0.00 0.00 00:09:24.499 00:09:25.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.431 Nvme0n1 : 10.00 14770.80 57.70 0.00 0.00 0.00 0.00 0.00 00:09:25.431 =================================================================================================================== 00:09:25.431 Total : 14770.80 57.70 0.00 0.00 0.00 0.00 0.00 00:09:25.431 00:09:25.431 00:09:25.431 Latency(us) 00:09:25.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.432 Nvme0n1 : 10.01 14774.25 57.71 0.00 0.00 8658.94 3276.80 16699.54 00:09:25.432 =================================================================================================================== 00:09:25.432 Total : 14774.25 57.71 0.00 0.00 8658.94 3276.80 16699.54 00:09:25.432 0 00:09:25.432 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2383049 00:09:25.432 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 2383049 ']' 00:09:25.432 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2383049 00:09:25.432 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:25.432 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.432 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2383049 00:09:25.690 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:25.690 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:25.690 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2383049' 00:09:25.690 killing process with pid 2383049 00:09:25.690 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2383049 00:09:25.690 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.690 00:09:25.690 Latency(us) 00:09:25.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.690 =================================================================================================================== 00:09:25.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.690 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2383049 00:09:25.947 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.205 07:14:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.463 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:26.463 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:26.721 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:26.721 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:26.721 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.979 [2024-07-25 07:14:59.333025] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.979 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:27.237 request: 00:09:27.237 { 00:09:27.237 "uuid": "ff8c836e-2784-4ea5-aaaf-5b638a8a28a2", 00:09:27.237 "method": "bdev_lvol_get_lvstores", 00:09:27.237 "req_id": 1 00:09:27.237 } 00:09:27.237 Got JSON-RPC error response 00:09:27.237 response: 00:09:27.237 { 00:09:27.237 "code": -19, 00:09:27.237 "message": "No such device" 00:09:27.237 } 00:09:27.237 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:27.237 07:14:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.237 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.237 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.237 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.495 aio_bdev 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2d0067c2-9c5c-46de-82e2-ff6bd18ebe91 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2d0067c2-9c5c-46de-82e2-ff6bd18ebe91 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.495 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.753 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2d0067c2-9c5c-46de-82e2-ff6bd18ebe91 -t 2000 00:09:28.011 [ 00:09:28.011 { 
00:09:28.011 "name": "2d0067c2-9c5c-46de-82e2-ff6bd18ebe91", 00:09:28.011 "aliases": [ 00:09:28.011 "lvs/lvol" 00:09:28.011 ], 00:09:28.011 "product_name": "Logical Volume", 00:09:28.011 "block_size": 4096, 00:09:28.011 "num_blocks": 38912, 00:09:28.011 "uuid": "2d0067c2-9c5c-46de-82e2-ff6bd18ebe91", 00:09:28.011 "assigned_rate_limits": { 00:09:28.011 "rw_ios_per_sec": 0, 00:09:28.011 "rw_mbytes_per_sec": 0, 00:09:28.011 "r_mbytes_per_sec": 0, 00:09:28.011 "w_mbytes_per_sec": 0 00:09:28.011 }, 00:09:28.011 "claimed": false, 00:09:28.011 "zoned": false, 00:09:28.011 "supported_io_types": { 00:09:28.011 "read": true, 00:09:28.011 "write": true, 00:09:28.011 "unmap": true, 00:09:28.011 "flush": false, 00:09:28.011 "reset": true, 00:09:28.011 "nvme_admin": false, 00:09:28.011 "nvme_io": false, 00:09:28.011 "nvme_io_md": false, 00:09:28.011 "write_zeroes": true, 00:09:28.011 "zcopy": false, 00:09:28.011 "get_zone_info": false, 00:09:28.011 "zone_management": false, 00:09:28.011 "zone_append": false, 00:09:28.011 "compare": false, 00:09:28.012 "compare_and_write": false, 00:09:28.012 "abort": false, 00:09:28.012 "seek_hole": true, 00:09:28.012 "seek_data": true, 00:09:28.012 "copy": false, 00:09:28.012 "nvme_iov_md": false 00:09:28.012 }, 00:09:28.012 "driver_specific": { 00:09:28.012 "lvol": { 00:09:28.012 "lvol_store_uuid": "ff8c836e-2784-4ea5-aaaf-5b638a8a28a2", 00:09:28.012 "base_bdev": "aio_bdev", 00:09:28.012 "thin_provision": false, 00:09:28.012 "num_allocated_clusters": 38, 00:09:28.012 "snapshot": false, 00:09:28.012 "clone": false, 00:09:28.012 "esnap_clone": false 00:09:28.012 } 00:09:28.012 } 00:09:28.012 } 00:09:28.012 ] 00:09:28.012 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:28.012 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:28.012 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:28.270 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:28.270 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:28.270 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.530 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.530 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2d0067c2-9c5c-46de-82e2-ff6bd18ebe91 00:09:28.824 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff8c836e-2784-4ea5-aaaf-5b638a8a28a2 00:09:29.082 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.340 00:09:29.340 real 0m17.582s 00:09:29.340 user 0m17.061s 00:09:29.340 sys 0m1.931s 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.340 07:15:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:29.340 ************************************ 00:09:29.340 END TEST lvs_grow_clean 00:09:29.340 ************************************ 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:29.340 ************************************ 00:09:29.340 START TEST lvs_grow_dirty 00:09:29.340 ************************************ 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.340 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.599 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.599 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.858 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:29.858 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:29.858 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:30.116 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:30.116 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:30.116 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
595f72ce-82db-4b2a-a642-4f392ae0161e lvol 150 00:09:30.373 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0766a39c-ed61-4471-8faf-31faedf00baa 00:09:30.373 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.373 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:30.631 [2024-07-25 07:15:02.971421] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:30.631 [2024-07-25 07:15:02.971499] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:30.631 true 00:09:30.631 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:30.631 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.889 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.889 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:31.152 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
0766a39c-ed61-4471-8faf-31faedf00baa 00:09:31.412 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:31.670 [2024-07-25 07:15:03.982499] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.670 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2385228 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2385228 /var/tmp/bdevperf.sock 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2385228 ']' 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.928 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.928 [2024-07-25 07:15:04.288145] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:31.928 [2024-07-25 07:15:04.288220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385228 ] 00:09:31.928 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.928 [2024-07-25 07:15:04.349872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.186 [2024-07-25 07:15:04.469087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.186 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.186 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:32.186 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:32.751 Nvme0n1 00:09:32.751 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:33.010 [ 00:09:33.010 { 00:09:33.010 "name": "Nvme0n1", 00:09:33.010 "aliases": [ 
00:09:33.010 "0766a39c-ed61-4471-8faf-31faedf00baa" 00:09:33.010 ], 00:09:33.010 "product_name": "NVMe disk", 00:09:33.010 "block_size": 4096, 00:09:33.010 "num_blocks": 38912, 00:09:33.010 "uuid": "0766a39c-ed61-4471-8faf-31faedf00baa", 00:09:33.010 "assigned_rate_limits": { 00:09:33.010 "rw_ios_per_sec": 0, 00:09:33.010 "rw_mbytes_per_sec": 0, 00:09:33.010 "r_mbytes_per_sec": 0, 00:09:33.010 "w_mbytes_per_sec": 0 00:09:33.010 }, 00:09:33.010 "claimed": false, 00:09:33.010 "zoned": false, 00:09:33.010 "supported_io_types": { 00:09:33.010 "read": true, 00:09:33.010 "write": true, 00:09:33.010 "unmap": true, 00:09:33.010 "flush": true, 00:09:33.010 "reset": true, 00:09:33.010 "nvme_admin": true, 00:09:33.010 "nvme_io": true, 00:09:33.010 "nvme_io_md": false, 00:09:33.010 "write_zeroes": true, 00:09:33.010 "zcopy": false, 00:09:33.010 "get_zone_info": false, 00:09:33.010 "zone_management": false, 00:09:33.010 "zone_append": false, 00:09:33.010 "compare": true, 00:09:33.010 "compare_and_write": true, 00:09:33.010 "abort": true, 00:09:33.010 "seek_hole": false, 00:09:33.010 "seek_data": false, 00:09:33.010 "copy": true, 00:09:33.010 "nvme_iov_md": false 00:09:33.010 }, 00:09:33.010 "memory_domains": [ 00:09:33.010 { 00:09:33.010 "dma_device_id": "system", 00:09:33.010 "dma_device_type": 1 00:09:33.010 } 00:09:33.010 ], 00:09:33.010 "driver_specific": { 00:09:33.010 "nvme": [ 00:09:33.010 { 00:09:33.010 "trid": { 00:09:33.010 "trtype": "TCP", 00:09:33.010 "adrfam": "IPv4", 00:09:33.010 "traddr": "10.0.0.2", 00:09:33.010 "trsvcid": "4420", 00:09:33.010 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:33.010 }, 00:09:33.010 "ctrlr_data": { 00:09:33.010 "cntlid": 1, 00:09:33.010 "vendor_id": "0x8086", 00:09:33.010 "model_number": "SPDK bdev Controller", 00:09:33.010 "serial_number": "SPDK0", 00:09:33.010 "firmware_revision": "24.09", 00:09:33.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:33.010 "oacs": { 00:09:33.010 "security": 0, 00:09:33.010 "format": 0, 00:09:33.010 
"firmware": 0, 00:09:33.010 "ns_manage": 0 00:09:33.010 }, 00:09:33.010 "multi_ctrlr": true, 00:09:33.010 "ana_reporting": false 00:09:33.010 }, 00:09:33.010 "vs": { 00:09:33.010 "nvme_version": "1.3" 00:09:33.010 }, 00:09:33.010 "ns_data": { 00:09:33.010 "id": 1, 00:09:33.010 "can_share": true 00:09:33.010 } 00:09:33.010 } 00:09:33.010 ], 00:09:33.010 "mp_policy": "active_passive" 00:09:33.010 } 00:09:33.010 } 00:09:33.010 ] 00:09:33.010 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2385360 00:09:33.010 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:33.010 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:33.010 Running I/O for 10 seconds... 00:09:33.944 Latency(us) 00:09:33.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.944 Nvme0n1 : 1.00 14128.00 55.19 0.00 0.00 0.00 0.00 0.00 00:09:33.944 =================================================================================================================== 00:09:33.944 Total : 14128.00 55.19 0.00 0.00 0.00 0.00 0.00 00:09:33.944 00:09:34.877 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:35.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.135 Nvme0n1 : 2.00 14285.50 55.80 0.00 0.00 0.00 0.00 0.00 00:09:35.135 =================================================================================================================== 00:09:35.135 Total : 14285.50 55.80 
0.00 0.00 0.00 0.00 0.00 00:09:35.135 00:09:35.135 true 00:09:35.135 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:35.135 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:35.393 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:35.393 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:35.393 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2385360 00:09:35.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.959 Nvme0n1 : 3.00 14376.33 56.16 0.00 0.00 0.00 0.00 0.00 00:09:35.959 =================================================================================================================== 00:09:35.959 Total : 14376.33 56.16 0.00 0.00 0.00 0.00 0.00 00:09:35.959 00:09:37.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.333 Nvme0n1 : 4.00 14516.25 56.70 0.00 0.00 0.00 0.00 0.00 00:09:37.333 =================================================================================================================== 00:09:37.333 Total : 14516.25 56.70 0.00 0.00 0.00 0.00 0.00 00:09:37.333 00:09:38.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.266 Nvme0n1 : 5.00 14575.60 56.94 0.00 0.00 0.00 0.00 0.00 00:09:38.266 =================================================================================================================== 00:09:38.266 Total : 14575.60 56.94 0.00 0.00 0.00 0.00 0.00 00:09:38.266 00:09:39.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
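The checks above (`data_clusters == 49` on the 200 MiB backing file, then `data_clusters == 99` after `truncate -s 400M` plus `bdev_aio_rescan` and `bdev_lvol_grow_lvstore`) follow from the `--cluster-sz 4194304` used at lvstore creation. A minimal sketch of that arithmetic, assuming the lvstore reserves one cluster's worth of space for metadata (an assumption not stated in the log, but consistent with both reported values):

```python
# Sketch of the data-cluster arithmetic exercised by nvmf_lvs_grow.sh above.
# ASSUMPTION: one 4 MiB cluster is consumed by lvstore metadata; this matches
# the log's 49 clusters (200 MiB file) and 99 clusters (after growing to 400 MiB).
MiB = 1024 * 1024
CLUSTER_SZ = 4 * MiB  # --cluster-sz 4194304 from the bdev_lvol_create_lvstore call


def data_clusters(backing_bytes, md_clusters=1):
    """Usable data clusters in an lvstore on a backing file of the given size."""
    return backing_bytes // CLUSTER_SZ - md_clusters


print(data_clusters(200 * MiB))  # 49, the first check in the log
print(data_clusters(400 * MiB))  # 99, the check after bdev_lvol_grow_lvstore
```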
4096) 00:09:39.199 Nvme0n1 : 6.00 14651.00 57.23 0.00 0.00 0.00 0.00 0.00 00:09:39.199 =================================================================================================================== 00:09:39.199 Total : 14651.00 57.23 0.00 0.00 0.00 0.00 0.00 00:09:39.199 00:09:40.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.133 Nvme0n1 : 7.00 14712.71 57.47 0.00 0.00 0.00 0.00 0.00 00:09:40.133 =================================================================================================================== 00:09:40.133 Total : 14712.71 57.47 0.00 0.00 0.00 0.00 0.00 00:09:40.133 00:09:41.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.065 Nvme0n1 : 8.00 14781.12 57.74 0.00 0.00 0.00 0.00 0.00 00:09:41.065 =================================================================================================================== 00:09:41.065 Total : 14781.12 57.74 0.00 0.00 0.00 0.00 0.00 00:09:41.065 00:09:41.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.998 Nvme0n1 : 9.00 14821.33 57.90 0.00 0.00 0.00 0.00 0.00 00:09:41.998 =================================================================================================================== 00:09:41.998 Total : 14821.33 57.90 0.00 0.00 0.00 0.00 0.00 00:09:41.998 00:09:43.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.372 Nvme0n1 : 10.00 14845.80 57.99 0.00 0.00 0.00 0.00 0.00 00:09:43.372 =================================================================================================================== 00:09:43.372 Total : 14845.80 57.99 0.00 0.00 0.00 0.00 0.00 00:09:43.372 00:09:43.372 00:09:43.372 Latency(us) 00:09:43.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.372 Nvme0n1 : 10.00 14851.44 58.01 0.00 0.00 8613.80 
4951.61 17087.91 00:09:43.372 =================================================================================================================== 00:09:43.372 Total : 14851.44 58.01 0.00 0.00 8613.80 4951.61 17087.91 00:09:43.372 0 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2385228 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2385228 ']' 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2385228 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2385228 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2385228' 00:09:43.372 killing process with pid 2385228 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2385228 00:09:43.372 Received shutdown signal, test time was about 10.000000 seconds 00:09:43.372 00:09:43.372 Latency(us) 00:09:43.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.372 =================================================================================================================== 00:09:43.372 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2385228 00:09:43.372 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.630 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:43.888 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.888 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2382608 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2382608 00:09:44.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2382608 Killed "${NVMF_APP[@]}" "$@" 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2387211 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2387211 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2387211 ']' 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.459 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.460 07:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.460 [2024-07-25 07:15:16.762890] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:09:44.460 [2024-07-25 07:15:16.762965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.460 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.460 [2024-07-25 07:15:16.827021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.460 [2024-07-25 07:15:16.936331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.460 [2024-07-25 07:15:16.936393] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.460 [2024-07-25 07:15:16.936407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.460 [2024-07-25 07:15:16.936419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.460 [2024-07-25 07:15:16.936429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:44.460 [2024-07-25 07:15:16.936465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.765 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.023 [2024-07-25 07:15:17.350013] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:45.023 [2024-07-25 07:15:17.350152] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:45.023 [2024-07-25 07:15:17.350210] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0766a39c-ed61-4471-8faf-31faedf00baa 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0766a39c-ed61-4471-8faf-31faedf00baa 
00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.023 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:45.281 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0766a39c-ed61-4471-8faf-31faedf00baa -t 2000 00:09:45.539 [ 00:09:45.539 { 00:09:45.539 "name": "0766a39c-ed61-4471-8faf-31faedf00baa", 00:09:45.539 "aliases": [ 00:09:45.539 "lvs/lvol" 00:09:45.539 ], 00:09:45.539 "product_name": "Logical Volume", 00:09:45.539 "block_size": 4096, 00:09:45.539 "num_blocks": 38912, 00:09:45.539 "uuid": "0766a39c-ed61-4471-8faf-31faedf00baa", 00:09:45.539 "assigned_rate_limits": { 00:09:45.539 "rw_ios_per_sec": 0, 00:09:45.539 "rw_mbytes_per_sec": 0, 00:09:45.539 "r_mbytes_per_sec": 0, 00:09:45.539 "w_mbytes_per_sec": 0 00:09:45.539 }, 00:09:45.539 "claimed": false, 00:09:45.539 "zoned": false, 00:09:45.539 "supported_io_types": { 00:09:45.539 "read": true, 00:09:45.539 "write": true, 00:09:45.539 "unmap": true, 00:09:45.539 "flush": false, 00:09:45.539 "reset": true, 00:09:45.539 "nvme_admin": false, 00:09:45.539 "nvme_io": false, 00:09:45.539 "nvme_io_md": false, 00:09:45.539 "write_zeroes": true, 00:09:45.539 "zcopy": false, 00:09:45.539 "get_zone_info": false, 00:09:45.539 "zone_management": false, 00:09:45.539 "zone_append": 
false, 00:09:45.539 "compare": false, 00:09:45.539 "compare_and_write": false, 00:09:45.539 "abort": false, 00:09:45.539 "seek_hole": true, 00:09:45.539 "seek_data": true, 00:09:45.539 "copy": false, 00:09:45.539 "nvme_iov_md": false 00:09:45.539 }, 00:09:45.539 "driver_specific": { 00:09:45.539 "lvol": { 00:09:45.539 "lvol_store_uuid": "595f72ce-82db-4b2a-a642-4f392ae0161e", 00:09:45.539 "base_bdev": "aio_bdev", 00:09:45.539 "thin_provision": false, 00:09:45.539 "num_allocated_clusters": 38, 00:09:45.539 "snapshot": false, 00:09:45.539 "clone": false, 00:09:45.539 "esnap_clone": false 00:09:45.539 } 00:09:45.539 } 00:09:45.539 } 00:09:45.539 ] 00:09:45.539 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:45.539 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:45.540 07:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:45.798 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:45.798 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:45.798 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:46.055 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:46.055 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
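The lvol JSON above reports `"num_blocks": 38912` and `"num_allocated_clusters": 38` for a volume created with `bdev_lvol_create ... lvol 150`, and the lvstore check passes with `free_clusters == 61`. A short sketch of why those numbers line up, assuming lvol sizes are rounded up to a whole number of clusters (consistent with every value in the log, though the rounding rule itself is an inference):

```python
import math

# Sketch of the lvol sizing seen in the bdev_get_bdevs output above.
# ASSUMPTION: a requested lvol size is rounded up to whole clusters; this
# reproduces the log's 38912 blocks and 38 allocated clusters for "lvol 150".
MiB = 1024 * 1024
CLUSTER_SZ = 4 * MiB   # --cluster-sz 4194304 from the log
BLOCK_SIZE = 4096      # "block_size": 4096 in the bdev JSON

requested = 150 * MiB                              # bdev_lvol_create ... lvol 150
clusters = math.ceil(requested / CLUSTER_SZ)       # 38 allocated clusters
num_blocks = clusters * CLUSTER_SZ // BLOCK_SIZE   # 38912 blocks of 4096 bytes

# With 99 data clusters after the grow, the free-cluster check in the log
# is simply total minus allocated.
free_clusters = 99 - clusters                      # 61

print(clusters, num_blocks, free_clusters)
```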
bdev_aio_delete aio_bdev 00:09:46.312 [2024-07-25 07:15:18.642999] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:46.312 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:46.312 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:46.312 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:46.312 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.313 07:15:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:46.313 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:46.570 request: 00:09:46.570 { 00:09:46.570 "uuid": "595f72ce-82db-4b2a-a642-4f392ae0161e", 00:09:46.570 "method": "bdev_lvol_get_lvstores", 00:09:46.570 "req_id": 1 00:09:46.570 } 00:09:46.570 Got JSON-RPC error response 00:09:46.570 response: 00:09:46.570 { 00:09:46.570 "code": -19, 00:09:46.570 "message": "No such device" 00:09:46.570 } 00:09:46.570 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:46.570 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:46.570 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:46.570 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:46.571 07:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.827 aio_bdev 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0766a39c-ed61-4471-8faf-31faedf00baa 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0766a39c-ed61-4471-8faf-31faedf00baa 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.827 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.085 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0766a39c-ed61-4471-8faf-31faedf00baa -t 2000 00:09:47.342 [ 00:09:47.342 { 00:09:47.342 "name": "0766a39c-ed61-4471-8faf-31faedf00baa", 00:09:47.342 "aliases": [ 00:09:47.342 "lvs/lvol" 00:09:47.343 ], 00:09:47.343 "product_name": "Logical Volume", 00:09:47.343 "block_size": 4096, 00:09:47.343 "num_blocks": 38912, 00:09:47.343 "uuid": "0766a39c-ed61-4471-8faf-31faedf00baa", 00:09:47.343 "assigned_rate_limits": { 00:09:47.343 "rw_ios_per_sec": 0, 00:09:47.343 "rw_mbytes_per_sec": 0, 00:09:47.343 "r_mbytes_per_sec": 0, 00:09:47.343 "w_mbytes_per_sec": 0 00:09:47.343 }, 00:09:47.343 "claimed": false, 00:09:47.343 "zoned": false, 00:09:47.343 "supported_io_types": { 00:09:47.343 "read": true, 00:09:47.343 "write": true, 00:09:47.343 "unmap": true, 00:09:47.343 "flush": false, 00:09:47.343 "reset": true, 00:09:47.343 "nvme_admin": false, 00:09:47.343 "nvme_io": false, 00:09:47.343 "nvme_io_md": false, 00:09:47.343 "write_zeroes": true, 00:09:47.343 "zcopy": false, 00:09:47.343 "get_zone_info": false, 00:09:47.343 "zone_management": false, 00:09:47.343 "zone_append": false, 00:09:47.343 "compare": false, 00:09:47.343 "compare_and_write": false, 
00:09:47.343 "abort": false, 00:09:47.343 "seek_hole": true, 00:09:47.343 "seek_data": true, 00:09:47.343 "copy": false, 00:09:47.343 "nvme_iov_md": false 00:09:47.343 }, 00:09:47.343 "driver_specific": { 00:09:47.343 "lvol": { 00:09:47.343 "lvol_store_uuid": "595f72ce-82db-4b2a-a642-4f392ae0161e", 00:09:47.343 "base_bdev": "aio_bdev", 00:09:47.343 "thin_provision": false, 00:09:47.343 "num_allocated_clusters": 38, 00:09:47.343 "snapshot": false, 00:09:47.343 "clone": false, 00:09:47.343 "esnap_clone": false 00:09:47.343 } 00:09:47.343 } 00:09:47.343 } 00:09:47.343 ] 00:09:47.343 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:47.343 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:47.343 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:47.601 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:47.601 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:47.601 07:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:47.858 07:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:47.858 07:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0766a39c-ed61-4471-8faf-31faedf00baa 00:09:48.116 07:15:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 595f72ce-82db-4b2a-a642-4f392ae0161e 00:09:48.374 07:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.632 07:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:48.632 00:09:48.632 real 0m19.312s 00:09:48.632 user 0m48.881s 00:09:48.632 sys 0m4.665s 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:48.632 ************************************ 00:09:48.632 END TEST lvs_grow_dirty 00:09:48.632 ************************************ 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:48.632 nvmf_trace.0 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.632 rmmod nvme_tcp 00:09:48.632 rmmod nvme_fabrics 00:09:48.632 rmmod nvme_keyring 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2387211 ']' 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2387211 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2387211 ']' 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2387211 
00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.632 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2387211 00:09:48.890 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.890 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.890 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2387211' 00:09:48.890 killing process with pid 2387211 00:09:48.890 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2387211 00:09:48.890 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2387211 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.148 07:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.046 07:15:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.046 00:09:51.046 real 0m42.436s 00:09:51.046 user 1m11.792s 00:09:51.046 sys 0m8.585s 00:09:51.046 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.046 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:51.046 ************************************ 00:09:51.046 END TEST nvmf_lvs_grow 00:09:51.046 ************************************ 00:09:51.046 07:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:51.046 07:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:51.046 07:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.046 07:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.046 ************************************ 00:09:51.047 START TEST nvmf_bdev_io_wait 00:09:51.047 ************************************ 00:09:51.047 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:51.304 * Looking for test storage... 
00:09:51.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.304 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:51.305 07:15:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.305 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.206 07:15:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:53.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:53.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.206 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.207 07:15:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:53.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:53.207 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.207 07:15:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.207 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:53.465 00:09:53.465 --- 10.0.0.2 ping statistics --- 00:09:53.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.465 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:09:53.465 00:09:53.465 --- 10.0.0.1 ping statistics --- 00:09:53.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.465 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2389736 00:09:53.465 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2389736 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2389736 ']' 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.466 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.466 [2024-07-25 07:15:25.879395] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:53.466 [2024-07-25 07:15:25.879486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.466 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.466 [2024-07-25 07:15:25.953427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.723 [2024-07-25 07:15:26.074184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:53.723 [2024-07-25 07:15:26.074255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.723 [2024-07-25 07:15:26.074279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.723 [2024-07-25 07:15:26.074293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.723 [2024-07-25 07:15:26.074304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.723 [2024-07-25 07:15:26.074371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.723 [2024-07-25 07:15:26.074433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.723 [2024-07-25 07:15:26.074474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.723 [2024-07-25 07:15:26.074477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 
07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 [2024-07-25 07:15:26.933499] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 Malloc0 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.657 
07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.657 [2024-07-25 07:15:26.996086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.657 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2389891 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2389893 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2389895 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.657 { 00:09:54.657 "params": { 00:09:54.657 "name": "Nvme$subsystem", 00:09:54.657 "trtype": "$TEST_TRANSPORT", 00:09:54.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.657 "adrfam": "ipv4", 00:09:54.657 "trsvcid": "$NVMF_PORT", 00:09:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.657 "hdgst": ${hdgst:-false}, 00:09:54.657 "ddgst": ${ddgst:-false} 00:09:54.657 }, 00:09:54.657 "method": "bdev_nvme_attach_controller" 00:09:54.657 } 00:09:54.657 EOF 00:09:54.657 )") 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.657 07:15:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2389897 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.657 { 00:09:54.657 "params": { 00:09:54.657 "name": "Nvme$subsystem", 00:09:54.657 "trtype": "$TEST_TRANSPORT", 00:09:54.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.657 "adrfam": "ipv4", 00:09:54.657 "trsvcid": "$NVMF_PORT", 00:09:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.657 "hdgst": ${hdgst:-false}, 00:09:54.657 "ddgst": ${ddgst:-false} 00:09:54.657 }, 00:09:54.657 "method": "bdev_nvme_attach_controller" 00:09:54.657 } 00:09:54.657 EOF 00:09:54.657 )") 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.657 { 00:09:54.657 "params": { 00:09:54.657 "name": "Nvme$subsystem", 00:09:54.657 "trtype": "$TEST_TRANSPORT", 00:09:54.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.657 "adrfam": "ipv4", 00:09:54.657 "trsvcid": "$NVMF_PORT", 00:09:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.657 "hdgst": ${hdgst:-false}, 00:09:54.657 "ddgst": ${ddgst:-false} 00:09:54.657 }, 00:09:54.657 "method": "bdev_nvme_attach_controller" 00:09:54.657 } 00:09:54.657 EOF 00:09:54.657 )") 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:54.657 { 00:09:54.657 "params": { 00:09:54.657 "name": "Nvme$subsystem", 00:09:54.657 "trtype": "$TEST_TRANSPORT", 00:09:54.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.657 "adrfam": "ipv4", 00:09:54.657 "trsvcid": "$NVMF_PORT", 00:09:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.657 "hdgst": ${hdgst:-false}, 00:09:54.657 "ddgst": ${ddgst:-false} 00:09:54.657 }, 00:09:54.657 "method": "bdev_nvme_attach_controller" 00:09:54.657 } 00:09:54.657 EOF 00:09:54.657 )") 00:09:54.657 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 2389891 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.658 "params": { 00:09:54.658 "name": "Nvme1", 00:09:54.658 "trtype": "tcp", 00:09:54.658 "traddr": "10.0.0.2", 00:09:54.658 "adrfam": "ipv4", 00:09:54.658 "trsvcid": "4420", 00:09:54.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.658 "hdgst": false, 00:09:54.658 "ddgst": false 00:09:54.658 }, 00:09:54.658 "method": "bdev_nvme_attach_controller" 00:09:54.658 }' 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.658 "params": { 00:09:54.658 "name": "Nvme1", 00:09:54.658 "trtype": "tcp", 00:09:54.658 "traddr": "10.0.0.2", 00:09:54.658 "adrfam": "ipv4", 00:09:54.658 "trsvcid": "4420", 00:09:54.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.658 "hdgst": false, 00:09:54.658 "ddgst": false 00:09:54.658 }, 00:09:54.658 "method": "bdev_nvme_attach_controller" 00:09:54.658 }' 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.658 "params": { 00:09:54.658 "name": "Nvme1", 00:09:54.658 "trtype": "tcp", 00:09:54.658 "traddr": "10.0.0.2", 00:09:54.658 "adrfam": "ipv4", 00:09:54.658 "trsvcid": "4420", 00:09:54.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.658 "hdgst": false, 00:09:54.658 "ddgst": false 00:09:54.658 }, 00:09:54.658 "method": "bdev_nvme_attach_controller" 00:09:54.658 }' 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:54.658 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:54.658 "params": { 00:09:54.658 "name": "Nvme1", 00:09:54.658 "trtype": "tcp", 00:09:54.658 "traddr": "10.0.0.2", 00:09:54.658 "adrfam": "ipv4", 00:09:54.658 "trsvcid": "4420", 00:09:54.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.658 "hdgst": false, 00:09:54.658 "ddgst": false 00:09:54.658 }, 00:09:54.658 "method": "bdev_nvme_attach_controller" 00:09:54.658 }' 00:09:54.658 [2024-07-25 07:15:27.044490] Starting SPDK v24.09-pre git sha1 
e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:54.658 [2024-07-25 07:15:27.044580] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:54.658 [2024-07-25 07:15:27.045562] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:54.658 [2024-07-25 07:15:27.045562] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:54.658 [2024-07-25 07:15:27.045570] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:09:54.658 [2024-07-25 07:15:27.045648] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:54.658 [2024-07-25 07:15:27.045649] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:54.658 [2024-07-25 07:15:27.045650] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:54.658 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.916 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.916 [2024-07-25 07:15:27.226304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.916 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.916 [2024-07-25 07:15:27.322497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.916 [2024-07-25 07:15:27.326107] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.916 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.916 [2024-07-25 07:15:27.423844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.916 [2024-07-25 07:15:27.424059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.174 [2024-07-25 07:15:27.499098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.174 [2024-07-25 07:15:27.526726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.174 [2024-07-25 07:15:27.593754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:55.174 Running I/O for 1 seconds... 00:09:55.436 Running I/O for 1 seconds... 00:09:55.436 Running I/O for 1 seconds... 00:09:55.436 Running I/O for 1 seconds... 00:09:56.377 00:09:56.377 Latency(us) 00:09:56.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.377 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:56.377 Nvme1n1 : 1.02 4484.12 17.52 0.00 0.00 28334.47 12281.93 42331.40 00:09:56.377 =================================================================================================================== 00:09:56.377 Total : 4484.12 17.52 0.00 0.00 28334.47 12281.93 42331.40 00:09:56.377 00:09:56.377 Latency(us) 00:09:56.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.377 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:56.377 Nvme1n1 : 1.00 93416.07 364.91 0.00 0.00 1364.87 467.25 2245.21 00:09:56.377 =================================================================================================================== 00:09:56.377 Total : 93416.07 364.91 0.00 0.00 1364.87 467.25 2245.21 00:09:56.377 00:09:56.377 Latency(us) 00:09:56.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.377 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 
00:09:56.377 Nvme1n1 : 1.01 5156.20 20.14 0.00 0.00 24721.18 7573.05 59807.67 00:09:56.377 =================================================================================================================== 00:09:56.377 Total : 5156.20 20.14 0.00 0.00 24721.18 7573.05 59807.67 00:09:56.635 00:09:56.635 Latency(us) 00:09:56.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.635 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:56.635 Nvme1n1 : 1.01 10758.09 42.02 0.00 0.00 11852.60 6407.96 22039.51 00:09:56.635 =================================================================================================================== 00:09:56.635 Total : 10758.09 42.02 0.00 0.00 11852.60 6407.96 22039.51 00:09:56.635 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2389893 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2389895 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2389897 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.893 07:15:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.893 rmmod nvme_tcp 00:09:56.893 rmmod nvme_fabrics 00:09:56.893 rmmod nvme_keyring 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2389736 ']' 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2389736 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2389736 ']' 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2389736 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2389736 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2389736' 00:09:56.893 killing process with pid 2389736 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2389736 00:09:56.893 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2389736 00:09:57.151 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.151 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.151 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.151 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.151 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.152 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.152 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.152 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.678 00:09:59.678 real 0m8.122s 00:09:59.678 user 0m19.490s 00:09:59.678 sys 0m3.892s 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.678 ************************************ 00:09:59.678 END TEST nvmf_bdev_io_wait 00:09:59.678 
************************************ 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.678 ************************************ 00:09:59.678 START TEST nvmf_queue_depth 00:09:59.678 ************************************ 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:59.678 * Looking for test storage... 00:09:59.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.678 
07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.678 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.679 07:15:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.679 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:01.617 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:01.617 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.617 07:15:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:01.617 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.617 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:01.618 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.618 07:15:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.618 
07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:01.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:10:01.618 00:10:01.618 --- 10.0.0.2 ping statistics --- 00:10:01.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.618 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:10:01.618 00:10:01.618 --- 10.0.0.1 ping statistics --- 00:10:01.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.618 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2392129 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2392129 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2392129 ']' 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.618 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.618 [2024-07-25 07:15:33.890403] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:10:01.618 [2024-07-25 07:15:33.890500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.618 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.618 [2024-07-25 07:15:33.957597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.618 [2024-07-25 07:15:34.069586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.618 [2024-07-25 07:15:34.069643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:01.618 [2024-07-25 07:15:34.069672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.618 [2024-07-25 07:15:34.069684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.618 [2024-07-25 07:15:34.069695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.618 [2024-07-25 07:15:34.069726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 [2024-07-25 07:15:34.220939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 Malloc0 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 [2024-07-25 07:15:34.278760] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.877 07:15:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2392264 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2392264 /var/tmp/bdevperf.sock 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2392264 ']' 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:01.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.877 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.877 [2024-07-25 07:15:34.324180] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:10:01.877 [2024-07-25 07:15:34.324263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392264 ] 00:10:01.877 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.877 [2024-07-25 07:15:34.386369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.135 [2024-07-25 07:15:34.503446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.135 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.135 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:02.135 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:02.135 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.135 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.393 NVMe0n1 00:10:02.393 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.393 07:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:02.652 Running I/O for 10 seconds... 
00:10:12.616 00:10:12.616 Latency(us) 00:10:12.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.616 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:12.616 Verification LBA range: start 0x0 length 0x4000 00:10:12.616 NVMe0n1 : 10.09 8492.23 33.17 0.00 0.00 120014.40 24660.95 72623.60 00:10:12.616 =================================================================================================================== 00:10:12.616 Total : 8492.23 33.17 0.00 0.00 120014.40 24660.95 72623.60 00:10:12.616 0 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2392264 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2392264 ']' 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2392264 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392264 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392264' 00:10:12.616 killing process with pid 2392264 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2392264 00:10:12.616 Received shutdown signal, test time was about 10.000000 seconds 00:10:12.616 00:10:12.616 Latency(us) 00:10:12.616 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.616 =================================================================================================================== 00:10:12.616 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:12.616 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2392264 00:10:12.874 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:12.874 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:12.874 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.874 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:13.132 rmmod nvme_tcp 00:10:13.132 rmmod nvme_fabrics 00:10:13.132 rmmod nvme_keyring 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2392129 ']' 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2392129 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2392129 ']' 
00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2392129 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392129 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392129' 00:10:13.132 killing process with pid 2392129 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2392129 00:10:13.132 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2392129 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:10:13.390 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:15.918 00:10:15.918 real 0m16.146s 00:10:15.918 user 0m22.942s 00:10:15.918 sys 0m2.960s 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.918 ************************************ 00:10:15.918 END TEST nvmf_queue_depth 00:10:15.918 ************************************ 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.918 ************************************ 00:10:15.918 START TEST nvmf_target_multipath 00:10:15.918 ************************************ 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:15.918 * Looking for test storage... 
00:10:15.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.918 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:15.919 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:10:17.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:17.818 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:17.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:17.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.818 07:15:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.818 07:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.818 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.818 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.818 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.818 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.818 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:10:17.819 00:10:17.819 --- 10.0.0.2 ping statistics --- 00:10:17.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.819 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:17.819 00:10:17.819 --- 10.0.0.1 ping statistics --- 00:10:17.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.819 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:17.819 07:15:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:17.819 only one NIC for nvmf test 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.819 rmmod nvme_tcp 00:10:17.819 rmmod nvme_fabrics 00:10:17.819 rmmod nvme_keyring 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.819 07:15:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.819 07:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.714 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:19.715 07:15:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.715 00:10:19.715 real 0m4.333s 00:10:19.715 user 0m0.830s 00:10:19.715 sys 0m1.488s 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.715 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:19.715 ************************************ 00:10:19.715 END TEST nvmf_target_multipath 00:10:19.715 ************************************ 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.973 
07:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.973 ************************************ 00:10:19.973 START TEST nvmf_zcopy 00:10:19.973 ************************************ 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:19.973 * Looking for test storage... 00:10:19.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.973 07:15:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.973 07:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.873 07:15:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:21.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:21.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.873 07:15:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:21.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.873 
07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:21.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.873 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:22.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:10:22.132 00:10:22.132 --- 10.0.0.2 ping statistics --- 00:10:22.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.132 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:10:22.132 00:10:22.132 --- 10.0.0.1 ping statistics --- 00:10:22.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.132 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2397460 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2397460 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2397460 ']' 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.132 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.132 [2024-07-25 07:15:54.584155] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:10:22.132 [2024-07-25 07:15:54.584248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.132 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.132 [2024-07-25 07:15:54.651904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.390 [2024-07-25 07:15:54.766544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.390 [2024-07-25 07:15:54.766611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.390 [2024-07-25 07:15:54.766639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.390 [2024-07-25 07:15:54.766656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.390 [2024-07-25 07:15:54.766666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:22.390 [2024-07-25 07:15:54.766697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 [2024-07-25 07:15:55.552111] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 [2024-07-25 07:15:55.568279] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 malloc0 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.325 07:15:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:23.325 { 00:10:23.325 "params": { 00:10:23.325 "name": "Nvme$subsystem", 00:10:23.325 "trtype": "$TEST_TRANSPORT", 00:10:23.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.325 "adrfam": "ipv4", 00:10:23.325 "trsvcid": "$NVMF_PORT", 00:10:23.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.325 "hdgst": ${hdgst:-false}, 00:10:23.325 "ddgst": ${ddgst:-false} 00:10:23.325 }, 00:10:23.325 "method": "bdev_nvme_attach_controller" 00:10:23.325 } 00:10:23.325 EOF 00:10:23.325 )") 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:23.325 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:23.325 "params": { 00:10:23.325 "name": "Nvme1", 00:10:23.325 "trtype": "tcp", 00:10:23.325 "traddr": "10.0.0.2", 00:10:23.325 "adrfam": "ipv4", 00:10:23.325 "trsvcid": "4420", 00:10:23.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:23.325 "hdgst": false, 00:10:23.325 "ddgst": false 00:10:23.325 }, 00:10:23.325 "method": "bdev_nvme_attach_controller" 00:10:23.325 }' 00:10:23.325 [2024-07-25 07:15:55.663134] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:10:23.325 [2024-07-25 07:15:55.663222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397613 ] 00:10:23.325 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.325 [2024-07-25 07:15:55.731592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.325 [2024-07-25 07:15:55.850758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.892 Running I/O for 10 seconds... 
00:10:33.912 00:10:33.912 Latency(us) 00:10:33.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.912 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:33.912 Verification LBA range: start 0x0 length 0x1000 00:10:33.912 Nvme1n1 : 10.02 5880.89 45.94 0.00 0.00 21707.24 2378.71 31263.10 00:10:33.912 =================================================================================================================== 00:10:33.912 Total : 5880.89 45.94 0.00 0.00 21707.24 2378.71 31263.10 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2398824 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:34.170 { 00:10:34.170 "params": { 00:10:34.170 "name": "Nvme$subsystem", 00:10:34.170 "trtype": "$TEST_TRANSPORT", 00:10:34.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:34.170 "adrfam": "ipv4", 00:10:34.170 "trsvcid": "$NVMF_PORT", 00:10:34.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:34.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:34.170 "hdgst": 
${hdgst:-false}, 00:10:34.170 "ddgst": ${ddgst:-false} 00:10:34.170 }, 00:10:34.170 "method": "bdev_nvme_attach_controller" 00:10:34.170 } 00:10:34.170 EOF 00:10:34.170 )") 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:34.170 [2024-07-25 07:16:06.514487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.170 [2024-07-25 07:16:06.514544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:34.170 07:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:34.170 "params": { 00:10:34.170 "name": "Nvme1", 00:10:34.170 "trtype": "tcp", 00:10:34.170 "traddr": "10.0.0.2", 00:10:34.170 "adrfam": "ipv4", 00:10:34.170 "trsvcid": "4420", 00:10:34.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:34.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:34.170 "hdgst": false, 00:10:34.170 "ddgst": false 00:10:34.170 }, 00:10:34.170 "method": "bdev_nvme_attach_controller" 00:10:34.170 }' 00:10:34.170 [2024-07-25 07:16:06.522431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.170 [2024-07-25 07:16:06.522453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.170 [2024-07-25 07:16:06.530451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.170 [2024-07-25 07:16:06.530473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.170 [2024-07-25 07:16:06.538473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.170 [2024-07-25 07:16:06.538493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.170 [2024-07-25 07:16:06.546495] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.170 [2024-07-25 07:16:06.546516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.170 [2024-07-25 07:16:06.551317] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:10:34.171 [2024-07-25 07:16:06.551390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398824 ] 00:10:34.171 [2024-07-25 07:16:06.554518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.554563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.562555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.562576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.570583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.570611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.171 [2024-07-25 07:16:06.578598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.578622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.586630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.586655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.594651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.594675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.602686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.602711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.610708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.610732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.615869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.171 [2024-07-25 07:16:06.618733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.618759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.626786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.626827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.634780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.634806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.642800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.642824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.650830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.650856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.658842] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.658867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.666864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.666889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.674889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.674913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.682937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.682973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.690968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.691007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.171 [2024-07-25 07:16:06.698955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.171 [2024-07-25 07:16:06.698983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.706973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.706998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.714993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.715018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.723014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.723038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.731036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.731060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.738850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.429 [2024-07-25 07:16:06.739058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.739082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.747080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.747104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.755127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.755159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.763157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.763197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.771179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.771218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.779201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.779252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 
07:16:06.787228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.787290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.795255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.795306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.803292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.803330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.811268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.811304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.819326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.819360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.827349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.827385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.835374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.835409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.843350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.843372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.851361] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.851382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.859395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.859422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.867410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.867434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.875433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.875456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.883455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.883479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.891473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.891495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.899495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.899531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.907531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.907556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.915560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.915585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.923592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.923618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.931616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.931654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.939645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.939673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.947666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.947694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.429 [2024-07-25 07:16:06.955678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.429 [2024-07-25 07:16:06.955698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 [2024-07-25 07:16:06.963706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.688 [2024-07-25 07:16:06.963735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 Running I/O for 5 seconds... 
00:10:34.688 [2024-07-25 07:16:06.971727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.688 [2024-07-25 07:16:06.971754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 [2024-07-25 07:16:06.984846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.688 [2024-07-25 07:16:06.984878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 [2024-07-25 07:16:06.996355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.688 [2024-07-25 07:16:06.996399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 [2024-07-25 07:16:07.010041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.688 [2024-07-25 07:16:07.010073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 [2024-07-25 07:16:07.022714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.688 [2024-07-25 07:16:07.022745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.688 [2024-07-25 07:16:07.035694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.035725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.048620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.048650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.061814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.061845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.074898] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.074929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.087275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.087318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.099847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.099878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.112427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.112469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.125520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.125564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.138264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.138315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.150377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.150404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.162931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.162961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.175557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.175587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.187698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.187728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.200822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.200852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.689 [2024-07-25 07:16:07.212985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.689 [2024-07-25 07:16:07.213013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.225889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 [2024-07-25 07:16:07.225919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.237914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 [2024-07-25 07:16:07.237944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.250590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 [2024-07-25 07:16:07.250620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.263251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 [2024-07-25 07:16:07.263281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.275923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 
[2024-07-25 07:16:07.275953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.288403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 [2024-07-25 07:16:07.288430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.300933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.947 [2024-07-25 07:16:07.300964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.947 [2024-07-25 07:16:07.314160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.314190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.326905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.326935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.339324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.339366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.351779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.351817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.364708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.364738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.376873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.376903] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.389165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.389195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.401872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.401902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.413729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.413760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.426442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.426469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.438702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.438732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.451287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.451331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.464152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.464183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.948 [2024-07-25 07:16:07.476732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.948 [2024-07-25 07:16:07.476760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:35.206 [2024-07-25 07:16:07.489613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.489644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.502213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.502252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.514782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.514813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.527371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.527399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.540433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.540461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.553720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.553751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.565967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.565997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.578411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.578438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.591277] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.591313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.602817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.602845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.614709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.614736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.626850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.626877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.638603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.638631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.650699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.650726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.664349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.664377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.675820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.675847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.687580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.687607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.699206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.699233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.711287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.711314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.723415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.723443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.206 [2024-07-25 07:16:07.734800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.206 [2024-07-25 07:16:07.734827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.745922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.745949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.757413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.757440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.768698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.768726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.780364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 
[2024-07-25 07:16:07.780392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.791706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.791733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.802988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.803015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.814341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.814381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.825460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.825488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.836539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.836567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.849844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.849871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.860269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.860296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.871848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.871876] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.464 [2024-07-25 07:16:07.884286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.464 [2024-07-25 07:16:07.884329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.896612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.896642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.909456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.909498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.922042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.922072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.934849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.934879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.947426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.947453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.959348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.959375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.465 [2024-07-25 07:16:07.971965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.971996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:35.465 [2024-07-25 07:16:07.984727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.465 [2024-07-25 07:16:07.984758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated at ~12 ms intervals from 07:16:07.997 through 07:16:10.006 ...]
00:10:37.533 [2024-07-25 07:16:10.018782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.533 [2024-07-25 07:16:10.018813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:10:37.533 [2024-07-25 07:16:10.032015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.533 [2024-07-25 07:16:10.032046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.533 [2024-07-25 07:16:10.045128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.533 [2024-07-25 07:16:10.045163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.533 [2024-07-25 07:16:10.057884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.533 [2024-07-25 07:16:10.057913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.070346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.070374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.082898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.082929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.096065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.096095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.109108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.109139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.121801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.121831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.134553] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.134583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.147630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.147660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.160912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.160942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.174126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.174155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.186465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.186493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.198789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.198817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.209958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.209985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.221187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.221213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.232447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.232474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.243575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.243602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.255196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.255224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.266143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.266171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.277771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.277799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.288945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.288973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.302029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.302058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-07-25 07:16:10.312929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-07-25 07:16:10.312965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.323967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 
[2024-07-25 07:16:10.323995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.335955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.335983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.347450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.347478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.359180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.359208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.371159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.371187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.382885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.382913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.394490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.394517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.406235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.406271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.418004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.418030] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.430028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.430055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.443867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.443894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.454968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.454995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.466781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.466809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.479396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.479424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.492086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.492116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.504604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.504635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.517662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.517693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.050 [2024-07-25 07:16:10.529844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.529874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.541818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.541849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.554378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.554406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-07-25 07:16:10.566845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-07-25 07:16:10.566874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.308 [2024-07-25 07:16:10.579329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.308 [2024-07-25 07:16:10.579355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.591736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.591766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.604129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.604160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.616757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.616787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.629144] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.629175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.641637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.641667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.654145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.654175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.665774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.665804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.678213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.678250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.691141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.691172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.704069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.704099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.717494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.717537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.729988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.730018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.742481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.742508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.754918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.754948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.767632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.767662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.780229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.780267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.793160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.793190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.806034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.806065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.818401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 [2024-07-25 07:16:10.818428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.309 [2024-07-25 07:16:10.830538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.309 
[2024-07-25 07:16:10.830568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.843254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.843297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.856125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.856155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.868856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.868886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.881369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.881396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.893887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.893917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.906390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.906417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.918997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.919028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.931047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.931077] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.943373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.943401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.955481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.955508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.968350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.968378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.980951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.567 [2024-07-25 07:16:10.980982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.567 [2024-07-25 07:16:10.993491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:10.993534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.006378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.006417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.019002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.019032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.031458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.031485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.568 [2024-07-25 07:16:11.043424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.043451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.055454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.055482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.068171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.068201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.080749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.080779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.568 [2024-07-25 07:16:11.093179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.568 [2024-07-25 07:16:11.093219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.105661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.105691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.118206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.118236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.130874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.130904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.143626] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.143657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.156785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.156815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.169574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.169605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.182467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.182494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.195474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.195501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.208482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.208524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.220937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.220967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.232980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.233011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.245569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.245613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.258152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.258182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.270648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.270678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.283070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.283100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.295746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.295776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.308549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.308580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.320982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.321012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.333784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 [2024-07-25 07:16:11.333828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.826 [2024-07-25 07:16:11.346531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.826 
[2024-07-25 07:16:11.346561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.358717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.358747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.371195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.371225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.383767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.383797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.397271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.397314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.410014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.410045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.422580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.422612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.436122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.436152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.448535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.448565] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.461148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.461178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.473794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.473824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.486658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.486689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.498834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.498865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.511747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.511778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.524303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.524331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.536749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.536779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.549358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.549386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:39.084 [2024-07-25 07:16:11.562072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.562103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.574116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.574154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.586831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.586861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.599643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.599674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.084 [2024-07-25 07:16:11.611874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.084 [2024-07-25 07:16:11.611901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.624683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.624713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.637626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.637656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.649729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.649760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.662315] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.662343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.674594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.674624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.687376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.687403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.699596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.699626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.712256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.712300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.724700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.724730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.737425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.737452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.750540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.750571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.763036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.763066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.775916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.775947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.789101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.789131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.801488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.801515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.814385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.814422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.826843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.826873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.839679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.839709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.852041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 [2024-07-25 07:16:11.852071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.343 [2024-07-25 07:16:11.865054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.343 
[2024-07-25 07:16:11.865084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.881388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.881435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.893262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.893306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.905732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.905762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.918713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.918743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.931735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.931766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.944297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.944325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.957096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.601 [2024-07-25 07:16:11.957126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.601 [2024-07-25 07:16:11.969572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:11.969603] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:11.982098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:11.982129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:11.992478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:11.992505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 00:10:39.602 Latency(us) 00:10:39.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.602 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:39.602 Nvme1n1 : 5.01 10221.16 79.85 0.00 0.00 12506.53 5485.61 23204.60 00:10:39.602 =================================================================================================================== 00:10:39.602 Total : 10221.16 79.85 0.00 0.00 12506.53 5485.61 23204.60 00:10:39.602 [2024-07-25 07:16:11.997152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:11.997181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.005182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.005207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.013188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.013214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.021266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.021313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.029309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.029364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.037329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.037381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.045359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.045413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.053368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.053421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.061382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.061435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.069406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.069457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.077412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.077461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.085445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.085500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 
[2024-07-25 07:16:12.093473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.093526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.101486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.101553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.109505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.109576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.117528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.117593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.602 [2024-07-25 07:16:12.125546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.602 [2024-07-25 07:16:12.125597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.133572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.133617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.141579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.141619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.149569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.149595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.157594] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.157619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.165627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.165651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.173621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.173643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.181710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.181768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.189721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.189784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.197712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.197744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.205720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.205745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.213740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.213765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.221762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.221787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.229774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.229797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.237881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.237934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.245877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.245926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.253852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.253879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.261868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.261893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 [2024-07-25 07:16:12.269890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.860 [2024-07-25 07:16:12.269915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2398824) - No such process 00:10:39.860 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2398824 00:10:39.860 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:39.860 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.860 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.860 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 delay0 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.861 07:16:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:39.861 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.861 [2024-07-25 07:16:12.343881] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:46.414 Initializing NVMe Controllers 00:10:46.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.414 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:46.414 Initialization complete. Launching workers. 00:10:46.414 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:10:46.414 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:10:46.414 success 140, unsuccess 240, failed 0 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.414 rmmod nvme_tcp 00:10:46.414 rmmod nvme_fabrics 00:10:46.414 rmmod nvme_keyring 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2397460 ']' 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2397460 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2397460 ']' 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # kill -0 2397460 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.414 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2397460 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2397460' 00:10:46.415 killing process with pid 2397460 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2397460 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2397460 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.415 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.943 07:16:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.943 00:10:48.943 real 0m28.593s 00:10:48.943 user 0m41.396s 00:10:48.943 sys 0m8.647s 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.943 ************************************ 00:10:48.943 END TEST nvmf_zcopy 00:10:48.943 ************************************ 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.943 ************************************ 00:10:48.943 START TEST nvmf_nmic 00:10:48.943 ************************************ 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:48.943 * Looking for test storage... 
00:10:48.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.943 
07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.943 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.944 07:16:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.944 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:50.846 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:50.846 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:50.846 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:50.846 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.846 07:16:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.846 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:50.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:10:50.847 00:10:50.847 --- 10.0.0.2 ping statistics --- 00:10:50.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.847 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:10:50.847 00:10:50.847 --- 10.0.0.1 ping statistics --- 00:10:50.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.847 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2402206 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2402206 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2402206 ']' 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.847 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.847 [2024-07-25 07:16:23.298954] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:10:50.847 [2024-07-25 07:16:23.299042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.847 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.847 [2024-07-25 07:16:23.363681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.124 [2024-07-25 07:16:23.476729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.124 [2024-07-25 07:16:23.476803] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.124 [2024-07-25 07:16:23.476817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.124 [2024-07-25 07:16:23.476844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.124 [2024-07-25 07:16:23.476854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.124 [2024-07-25 07:16:23.476940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.124 [2024-07-25 07:16:23.477007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.124 [2024-07-25 07:16:23.477073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.124 [2024-07-25 07:16:23.477076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.124 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.124 [2024-07-25 07:16:23.634732] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.384 Malloc0 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 [2024-07-25 07:16:23.687195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:51.384 test case1: single bdev can't be used in multiple subsystems 
00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 [2024-07-25 07:16:23.711053] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:51.384 [2024-07-25 07:16:23.711081] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:51.384 [2024-07-25 07:16:23.711112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.384 request: 00:10:51.384 { 00:10:51.384 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:51.384 "namespace": { 00:10:51.384 
"bdev_name": "Malloc0", 00:10:51.384 "no_auto_visible": false 00:10:51.384 }, 00:10:51.384 "method": "nvmf_subsystem_add_ns", 00:10:51.384 "req_id": 1 00:10:51.384 } 00:10:51.384 Got JSON-RPC error response 00:10:51.384 response: 00:10:51.384 { 00:10:51.384 "code": -32602, 00:10:51.384 "message": "Invalid parameters" 00:10:51.384 } 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:51.384 Adding namespace failed - expected result. 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:51.384 test case2: host connect to nvmf target in multiple paths 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.384 [2024-07-25 07:16:23.719162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.384 07:16:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.950 07:16:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:52.516 07:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.516 07:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:52.516 07:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.516 07:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:52.516 07:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:55.042 07:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:55.042 [global] 00:10:55.042 thread=1 00:10:55.042 invalidate=1 00:10:55.042 rw=write 00:10:55.042 time_based=1 00:10:55.042 runtime=1 00:10:55.042 ioengine=libaio 00:10:55.042 direct=1 00:10:55.042 bs=4096 00:10:55.042 iodepth=1 00:10:55.042 
norandommap=0 00:10:55.042 numjobs=1 00:10:55.042 00:10:55.042 verify_dump=1 00:10:55.042 verify_backlog=512 00:10:55.042 verify_state_save=0 00:10:55.042 do_verify=1 00:10:55.042 verify=crc32c-intel 00:10:55.042 [job0] 00:10:55.042 filename=/dev/nvme0n1 00:10:55.042 Could not set queue depth (nvme0n1) 00:10:55.042 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.042 fio-3.35 00:10:55.042 Starting 1 thread 00:10:55.975 00:10:55.975 job0: (groupid=0, jobs=1): err= 0: pid=2402729: Thu Jul 25 07:16:28 2024 00:10:55.975 read: IOPS=1927, BW=7708KiB/s (7893kB/s)(7716KiB/1001msec) 00:10:55.975 slat (nsec): min=5437, max=69191, avg=11042.74, stdev=5718.01 00:10:55.975 clat (usec): min=242, max=418, avg=283.74, stdev=18.79 00:10:55.975 lat (usec): min=250, max=433, avg=294.79, stdev=19.41 00:10:55.975 clat percentiles (usec): 00:10:55.975 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:10:55.975 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:55.975 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:10:55.975 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 420], 00:10:55.975 | 99.99th=[ 420] 00:10:55.975 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:55.975 slat (nsec): min=6287, max=65210, avg=14440.30, stdev=5965.23 00:10:55.975 clat (usec): min=160, max=271, avg=188.99, stdev=11.41 00:10:55.975 lat (usec): min=171, max=336, avg=203.43, stdev=13.78 00:10:55.975 clat percentiles (usec): 00:10:55.975 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:10:55.975 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:10:55.975 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:10:55.975 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 260], 99.95th=[ 262], 00:10:55.975 | 99.99th=[ 273] 00:10:55.975 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 
0.00, samples=1 00:10:55.975 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:55.975 lat (usec) : 250=51.80%, 500=48.20% 00:10:55.975 cpu : usr=2.70%, sys=5.90%, ctx=3977, majf=0, minf=2 00:10:55.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.975 issued rwts: total=1929,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.975 00:10:55.975 Run status group 0 (all jobs): 00:10:55.975 READ: bw=7708KiB/s (7893kB/s), 7708KiB/s-7708KiB/s (7893kB/s-7893kB/s), io=7716KiB (7901kB), run=1001-1001msec 00:10:55.975 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:55.975 00:10:55.975 Disk stats (read/write): 00:10:55.975 nvme0n1: ios=1626/2048, merge=0/0, ticks=459/389, in_queue=848, util=92.18% 00:10:55.975 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 
00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.233 rmmod nvme_tcp 00:10:56.233 rmmod nvme_fabrics 00:10:56.233 rmmod nvme_keyring 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2402206 ']' 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2402206 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2402206 ']' 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2402206 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.233 07:16:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2402206 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2402206' 00:10:56.233 killing process with pid 2402206 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2402206 00:10:56.233 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2402206 00:10:56.491 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.491 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.491 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.492 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.492 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.492 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.492 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.492 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.023 00:10:59.023 real 0m10.090s 00:10:59.023 user 0m22.483s 00:10:59.023 sys 0m2.496s 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.023 07:16:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 ************************************ 00:10:59.023 END TEST nvmf_nmic 00:10:59.023 ************************************ 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 ************************************ 00:10:59.023 START TEST nvmf_fio_target 00:10:59.023 ************************************ 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:59.023 * Looking for test storage... 
00:10:59.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.023 07:16:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.023 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.024 07:16:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.024 07:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.925 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.925 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.925 07:16:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.925 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:11:00.926 00:11:00.926 --- 10.0.0.2 ping statistics --- 00:11:00.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.926 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:00.926 00:11:00.926 --- 10.0.0.1 ping statistics --- 00:11:00.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.926 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2404870 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2404870 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2404870 ']' 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.926 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.926 [2024-07-25 07:16:33.225311] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:00.926 [2024-07-25 07:16:33.225393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.926 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.926 [2024-07-25 07:16:33.294046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.926 [2024-07-25 07:16:33.414908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.926 [2024-07-25 07:16:33.414976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:00.926 [2024-07-25 07:16:33.415003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.926 [2024-07-25 07:16:33.415017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.926 [2024-07-25 07:16:33.415030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.926 [2024-07-25 07:16:33.415116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.926 [2024-07-25 07:16:33.415169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.926 [2024-07-25 07:16:33.415223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.926 [2024-07-25 07:16:33.415226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.857 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:02.114 [2024-07-25 07:16:34.433647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.114 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.372 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:02.372 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.630 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:02.630 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.888 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:02.888 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.146 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:03.146 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:03.403 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.661 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:03.661 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.919 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:03.919 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.177 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:04.177 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:04.435 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.692 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:04.693 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.950 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:04.950 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.208 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.465 [2024-07-25 07:16:37.826502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.465 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:05.722 07:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:05.980 07:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.546 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:06.546 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.546 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.546 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:06.546 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:06.546 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:09.094 07:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.094 [global] 00:11:09.094 thread=1 00:11:09.094 invalidate=1 00:11:09.094 rw=write 00:11:09.094 time_based=1 00:11:09.094 runtime=1 00:11:09.094 ioengine=libaio 00:11:09.094 direct=1 00:11:09.094 bs=4096 00:11:09.094 iodepth=1 00:11:09.094 norandommap=0 00:11:09.094 numjobs=1 00:11:09.094 00:11:09.094 verify_dump=1 00:11:09.094 verify_backlog=512 00:11:09.094 verify_state_save=0 00:11:09.094 do_verify=1 00:11:09.094 verify=crc32c-intel 00:11:09.094 [job0] 00:11:09.094 filename=/dev/nvme0n1 00:11:09.094 [job1] 00:11:09.094 filename=/dev/nvme0n2 00:11:09.094 [job2] 00:11:09.094 filename=/dev/nvme0n3 00:11:09.094 [job3] 00:11:09.094 filename=/dev/nvme0n4 00:11:09.095 Could not set queue depth (nvme0n1) 00:11:09.095 Could not set queue depth (nvme0n2) 00:11:09.095 Could not set queue depth (nvme0n3) 00:11:09.095 Could not set queue depth (nvme0n4) 00:11:09.095 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 fio-3.35 00:11:09.095 Starting 4 threads 00:11:10.049 00:11:10.049 job0: (groupid=0, jobs=1): err= 0: pid=2406009: Thu Jul 25 07:16:42 2024 00:11:10.049 read: IOPS=568, BW=2274KiB/s (2329kB/s)(2324KiB/1022msec) 00:11:10.049 slat (nsec): min=6839, max=59106, avg=28471.63, stdev=9303.08 00:11:10.049 clat (usec): min=285, max=42030, avg=1182.94, stdev=5349.11 00:11:10.049 lat (usec): min=299, max=42045, avg=1211.41, stdev=5347.94 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 314], 5.00th=[ 347], 10.00th=[ 379], 20.00th=[ 408], 
00:11:10.049 | 30.00th=[ 441], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 502], 00:11:10.049 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 603], 00:11:10.049 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:10.049 | 99.99th=[42206] 00:11:10.049 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:11:10.049 slat (nsec): min=6162, max=65815, avg=19037.47, stdev=10938.97 00:11:10.049 clat (usec): min=174, max=521, avg=281.34, stdev=58.27 00:11:10.049 lat (usec): min=182, max=560, avg=300.37, stdev=62.43 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 182], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 233], 00:11:10.049 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:11:10.049 | 70.00th=[ 297], 80.00th=[ 334], 90.00th=[ 367], 95.00th=[ 392], 00:11:10.049 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 523], 00:11:10.049 | 99.99th=[ 523] 00:11:10.049 bw ( KiB/s): min= 2320, max= 5872, per=25.90%, avg=4096.00, stdev=2511.64, samples=2 00:11:10.049 iops : min= 580, max= 1468, avg=1024.00, stdev=627.91, samples=2 00:11:10.049 lat (usec) : 250=19.44%, 500=65.55%, 750=14.33%, 1000=0.06% 00:11:10.049 lat (msec) : 50=0.62% 00:11:10.049 cpu : usr=2.64%, sys=2.94%, ctx=1605, majf=0, minf=1 00:11:10.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 issued rwts: total=581,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.049 job1: (groupid=0, jobs=1): err= 0: pid=2406010: Thu Jul 25 07:16:42 2024 00:11:10.049 read: IOPS=23, BW=92.7KiB/s (94.9kB/s)(96.0KiB/1036msec) 00:11:10.049 slat (nsec): min=6271, max=33233, avg=23201.92, stdev=8905.74 00:11:10.049 clat (usec): min=376, max=42059, avg=37652.35, 
stdev=11467.16 00:11:10.049 lat (usec): min=409, max=42086, avg=37675.56, stdev=11466.94 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 375], 5.00th=[ 482], 10.00th=[41157], 20.00th=[41157], 00:11:10.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:10.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:10.049 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:10.049 | 99.99th=[42206] 00:11:10.049 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:10.049 slat (nsec): min=6251, max=49239, avg=15568.60, stdev=8604.62 00:11:10.049 clat (usec): min=183, max=1142, avg=237.77, stdev=66.11 00:11:10.049 lat (usec): min=191, max=1171, avg=253.34, stdev=66.72 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:11:10.049 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:11:10.049 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 281], 00:11:10.049 | 99.00th=[ 420], 99.50th=[ 857], 99.90th=[ 1139], 99.95th=[ 1139], 00:11:10.049 | 99.99th=[ 1139] 00:11:10.049 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:11:10.049 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:10.049 lat (usec) : 250=76.68%, 500=18.47%, 750=0.19%, 1000=0.37% 00:11:10.049 lat (msec) : 2=0.19%, 50=4.10% 00:11:10.049 cpu : usr=0.58%, sys=0.58%, ctx=536, majf=0, minf=1 00:11:10.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.049 job2: (groupid=0, jobs=1): err= 0: pid=2406012: Thu Jul 25 
07:16:42 2024 00:11:10.049 read: IOPS=977, BW=3910KiB/s (4004kB/s)(4004KiB/1024msec) 00:11:10.049 slat (nsec): min=5656, max=68850, avg=20772.62, stdev=11304.19 00:11:10.049 clat (usec): min=274, max=41993, avg=643.95, stdev=3417.18 00:11:10.049 lat (usec): min=281, max=42010, avg=664.72, stdev=3417.21 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 318], 00:11:10.049 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 359], 00:11:10.049 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 461], 00:11:10.049 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:11:10.049 | 99.99th=[42206] 00:11:10.049 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:11:10.049 slat (usec): min=6, max=22038, avg=43.88, stdev=688.93 00:11:10.049 clat (usec): min=194, max=815, avg=294.63, stdev=96.11 00:11:10.049 lat (usec): min=202, max=22458, avg=338.51, stdev=700.62 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 237], 00:11:10.049 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 273], 00:11:10.049 | 70.00th=[ 289], 80.00th=[ 330], 90.00th=[ 408], 95.00th=[ 490], 00:11:10.049 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[ 816], 00:11:10.049 | 99.99th=[ 816] 00:11:10.049 bw ( KiB/s): min= 1712, max= 6480, per=25.90%, avg=4096.00, stdev=3371.49, samples=2 00:11:10.049 iops : min= 428, max= 1620, avg=1024.00, stdev=842.87, samples=2 00:11:10.049 lat (usec) : 250=18.96%, 500=77.28%, 750=2.96%, 1000=0.44% 00:11:10.049 lat (msec) : 50=0.35% 00:11:10.049 cpu : usr=2.54%, sys=4.01%, ctx=2029, majf=0, minf=1 00:11:10.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 issued rwts: 
total=1001,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.049 job3: (groupid=0, jobs=1): err= 0: pid=2406013: Thu Jul 25 07:16:42 2024 00:11:10.049 read: IOPS=1328, BW=5315KiB/s (5442kB/s)(5320KiB/1001msec) 00:11:10.049 slat (nsec): min=5296, max=79287, avg=20137.76, stdev=10834.42 00:11:10.049 clat (usec): min=281, max=41464, avg=441.46, stdev=1126.51 00:11:10.049 lat (usec): min=299, max=41476, avg=461.60, stdev=1126.55 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 355], 20.00th=[ 375], 00:11:10.049 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 424], 00:11:10.049 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 469], 00:11:10.049 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 652], 99.95th=[41681], 00:11:10.049 | 99.99th=[41681] 00:11:10.049 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:10.049 slat (nsec): min=5695, max=63926, avg=15646.80, stdev=7109.33 00:11:10.049 clat (usec): min=182, max=749, avg=226.25, stdev=33.95 00:11:10.049 lat (usec): min=190, max=767, avg=241.90, stdev=35.49 00:11:10.049 clat percentiles (usec): 00:11:10.049 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:11:10.049 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:11:10.049 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 277], 00:11:10.049 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 603], 99.95th=[ 750], 00:11:10.049 | 99.99th=[ 750] 00:11:10.049 bw ( KiB/s): min= 6800, max= 6800, per=43.00%, avg=6800.00, stdev= 0.00, samples=1 00:11:10.049 iops : min= 1700, max= 1700, avg=1700.00, stdev= 0.00, samples=1 00:11:10.049 lat (usec) : 250=47.21%, 500=52.13%, 750=0.63% 00:11:10.049 lat (msec) : 50=0.03% 00:11:10.049 cpu : usr=2.70%, sys=5.40%, ctx=2866, majf=0, minf=2 00:11:10.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.049 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.049 issued rwts: total=1330,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.049 00:11:10.049 Run status group 0 (all jobs): 00:11:10.049 READ: bw=11.1MiB/s (11.6MB/s), 92.7KiB/s-5315KiB/s (94.9kB/s-5442kB/s), io=11.5MiB (12.0MB), run=1001-1036msec 00:11:10.049 WRITE: bw=15.4MiB/s (16.2MB/s), 1977KiB/s-6138KiB/s (2024kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1036msec 00:11:10.049 00:11:10.049 Disk stats (read/write): 00:11:10.049 nvme0n1: ios=626/1024, merge=0/0, ticks=549/280, in_queue=829, util=87.37% 00:11:10.049 nvme0n2: ios=34/512, merge=0/0, ticks=723/119, in_queue=842, util=86.66% 00:11:10.049 nvme0n3: ios=1054/1024, merge=0/0, ticks=804/289, in_queue=1093, util=97.59% 00:11:10.050 nvme0n4: ios=1024/1410, merge=0/0, ticks=420/309, in_queue=729, util=89.63% 00:11:10.050 07:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:10.050 [global] 00:11:10.050 thread=1 00:11:10.050 invalidate=1 00:11:10.050 rw=randwrite 00:11:10.050 time_based=1 00:11:10.050 runtime=1 00:11:10.050 ioengine=libaio 00:11:10.050 direct=1 00:11:10.050 bs=4096 00:11:10.050 iodepth=1 00:11:10.050 norandommap=0 00:11:10.050 numjobs=1 00:11:10.050 00:11:10.050 verify_dump=1 00:11:10.050 verify_backlog=512 00:11:10.050 verify_state_save=0 00:11:10.050 do_verify=1 00:11:10.050 verify=crc32c-intel 00:11:10.050 [job0] 00:11:10.050 filename=/dev/nvme0n1 00:11:10.050 [job1] 00:11:10.050 filename=/dev/nvme0n2 00:11:10.050 [job2] 00:11:10.050 filename=/dev/nvme0n3 00:11:10.050 [job3] 00:11:10.050 filename=/dev/nvme0n4 00:11:10.050 Could not set queue depth (nvme0n1) 00:11:10.050 Could not set queue depth (nvme0n2) 
00:11:10.050 Could not set queue depth (nvme0n3) 00:11:10.050 Could not set queue depth (nvme0n4) 00:11:10.308 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.308 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.308 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.308 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.308 fio-3.35 00:11:10.308 Starting 4 threads 00:11:11.676 00:11:11.676 job0: (groupid=0, jobs=1): err= 0: pid=2406247: Thu Jul 25 07:16:43 2024 00:11:11.676 read: IOPS=20, BW=83.4KiB/s (85.4kB/s)(84.0KiB/1007msec) 00:11:11.676 slat (nsec): min=8013, max=33485, avg=19880.33, stdev=8819.63 00:11:11.676 clat (usec): min=40892, max=42051, avg=41223.02, stdev=451.43 00:11:11.676 lat (usec): min=40907, max=42068, avg=41242.90, stdev=448.79 00:11:11.676 clat percentiles (usec): 00:11:11.676 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:11.676 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:11.676 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:11.676 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.676 | 99.99th=[42206] 00:11:11.676 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:11:11.676 slat (nsec): min=6776, max=45254, avg=15380.39, stdev=7652.60 00:11:11.676 clat (usec): min=185, max=416, avg=253.72, stdev=37.47 00:11:11.676 lat (usec): min=193, max=425, avg=269.10, stdev=36.68 00:11:11.676 clat percentiles (usec): 00:11:11.676 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:11:11.676 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 258], 00:11:11.676 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 318], 
00:11:11.676 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 416], 99.95th=[ 416], 00:11:11.676 | 99.99th=[ 416] 00:11:11.676 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.676 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.676 lat (usec) : 250=50.28%, 500=45.78% 00:11:11.676 lat (msec) : 50=3.94% 00:11:11.676 cpu : usr=0.70%, sys=0.89%, ctx=533, majf=0, minf=2 00:11:11.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.676 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.676 job1: (groupid=0, jobs=1): err= 0: pid=2406248: Thu Jul 25 07:16:43 2024 00:11:11.676 read: IOPS=756, BW=3027KiB/s (3099kB/s)(3072KiB/1015msec) 00:11:11.676 slat (nsec): min=4539, max=67512, avg=18292.86, stdev=9801.66 00:11:11.676 clat (usec): min=295, max=41986, avg=984.19, stdev=4870.84 00:11:11.676 lat (usec): min=305, max=42002, avg=1002.48, stdev=4870.93 00:11:11.676 clat percentiles (usec): 00:11:11.676 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 334], 00:11:11.676 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:11:11.676 | 70.00th=[ 416], 80.00th=[ 474], 90.00th=[ 519], 95.00th=[ 562], 00:11:11.676 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:11.676 | 99.99th=[42206] 00:11:11.676 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:11:11.676 slat (nsec): min=6232, max=51112, avg=12475.09, stdev=5885.99 00:11:11.676 clat (usec): min=176, max=439, avg=219.18, stdev=29.32 00:11:11.676 lat (usec): min=184, max=483, avg=231.65, stdev=31.06 00:11:11.676 clat percentiles (usec): 00:11:11.676 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 
20.00th=[ 196], 00:11:11.676 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:11:11.676 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 262], 00:11:11.676 | 99.00th=[ 330], 99.50th=[ 388], 99.90th=[ 392], 99.95th=[ 441], 00:11:11.676 | 99.99th=[ 441] 00:11:11.676 bw ( KiB/s): min= 8192, max= 8192, per=68.87%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.676 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:11.676 lat (usec) : 250=51.73%, 500=42.13%, 750=5.47%, 1000=0.06% 00:11:11.676 lat (msec) : 50=0.61% 00:11:11.676 cpu : usr=1.78%, sys=3.16%, ctx=1793, majf=0, minf=1 00:11:11.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.677 issued rwts: total=768,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.677 job2: (groupid=0, jobs=1): err= 0: pid=2406249: Thu Jul 25 07:16:43 2024 00:11:11.677 read: IOPS=357, BW=1430KiB/s (1465kB/s)(1456KiB/1018msec) 00:11:11.677 slat (nsec): min=7348, max=37595, avg=15740.33, stdev=6664.02 00:11:11.677 clat (usec): min=309, max=42017, avg=2379.90, stdev=8551.73 00:11:11.677 lat (usec): min=323, max=42033, avg=2395.64, stdev=8551.67 00:11:11.677 clat percentiles (usec): 00:11:11.677 | 1.00th=[ 314], 5.00th=[ 343], 10.00th=[ 363], 20.00th=[ 371], 00:11:11.677 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 404], 00:11:11.677 | 70.00th=[ 420], 80.00th=[ 449], 90.00th=[ 529], 95.00th=[11338], 00:11:11.677 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.677 | 99.99th=[42206] 00:11:11.677 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:11.677 slat (nsec): min=7848, max=51305, avg=17614.33, stdev=8713.19 00:11:11.677 clat (usec): min=195, 
max=492, avg=257.51, stdev=37.62 00:11:11.677 lat (usec): min=204, max=503, avg=275.13, stdev=40.13 00:11:11.677 clat percentiles (usec): 00:11:11.677 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:11:11.677 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 265], 00:11:11.677 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 322], 00:11:11.677 | 99.00th=[ 396], 99.50th=[ 441], 99.90th=[ 494], 99.95th=[ 494], 00:11:11.677 | 99.99th=[ 494] 00:11:11.677 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.677 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.677 lat (usec) : 250=26.48%, 500=68.72%, 750=2.28%, 1000=0.11% 00:11:11.677 lat (msec) : 4=0.23%, 20=0.11%, 50=2.05% 00:11:11.677 cpu : usr=0.88%, sys=2.06%, ctx=877, majf=0, minf=1 00:11:11.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.677 issued rwts: total=364,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.677 job3: (groupid=0, jobs=1): err= 0: pid=2406250: Thu Jul 25 07:16:43 2024 00:11:11.677 read: IOPS=748, BW=2993KiB/s (3065kB/s)(3092KiB/1033msec) 00:11:11.677 slat (nsec): min=5503, max=56982, avg=19182.28, stdev=9365.89 00:11:11.677 clat (usec): min=303, max=41950, avg=878.79, stdev=4366.58 00:11:11.677 lat (usec): min=312, max=41966, avg=897.97, stdev=4366.21 00:11:11.677 clat percentiles (usec): 00:11:11.677 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 359], 00:11:11.677 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 400], 00:11:11.677 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 506], 00:11:11.677 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:11.677 | 
99.99th=[42206] 00:11:11.677 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:11:11.677 slat (nsec): min=6496, max=57058, avg=18732.52, stdev=9380.43 00:11:11.677 clat (usec): min=172, max=1506, avg=301.18, stdev=110.44 00:11:11.677 lat (usec): min=189, max=1531, avg=319.92, stdev=113.65 00:11:11.677 clat percentiles (usec): 00:11:11.677 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 217], 00:11:11.677 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 258], 60.00th=[ 302], 00:11:11.677 | 70.00th=[ 338], 80.00th=[ 388], 90.00th=[ 453], 95.00th=[ 502], 00:11:11.677 | 99.00th=[ 594], 99.50th=[ 660], 99.90th=[ 1205], 99.95th=[ 1500], 00:11:11.677 | 99.99th=[ 1500] 00:11:11.677 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=2 00:11:11.677 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:11.677 lat (usec) : 250=26.99%, 500=67.45%, 750=4.51%, 1000=0.39% 00:11:11.677 lat (msec) : 2=0.17%, 50=0.50% 00:11:11.677 cpu : usr=1.94%, sys=4.17%, ctx=1797, majf=0, minf=1 00:11:11.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.677 issued rwts: total=773,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.677 00:11:11.677 Run status group 0 (all jobs): 00:11:11.677 READ: bw=7458KiB/s (7637kB/s), 83.4KiB/s-3027KiB/s (85.4kB/s-3099kB/s), io=7704KiB (7889kB), run=1007-1033msec 00:11:11.677 WRITE: bw=11.6MiB/s (12.2MB/s), 2012KiB/s-4035KiB/s (2060kB/s-4132kB/s), io=12.0MiB (12.6MB), run=1007-1033msec 00:11:11.677 00:11:11.677 Disk stats (read/write): 00:11:11.677 nvme0n1: ios=67/512, merge=0/0, ticks=733/121, in_queue=854, util=87.27% 00:11:11.677 nvme0n2: ios=808/1024, merge=0/0, ticks=1505/217, in_queue=1722, util=97.05% 
00:11:11.677 nvme0n3: ios=407/512, merge=0/0, ticks=1117/121, in_queue=1238, util=99.48% 00:11:11.677 nvme0n4: ios=745/1024, merge=0/0, ticks=923/281, in_queue=1204, util=95.47% 00:11:11.677 07:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:11.677 [global] 00:11:11.677 thread=1 00:11:11.677 invalidate=1 00:11:11.677 rw=write 00:11:11.677 time_based=1 00:11:11.677 runtime=1 00:11:11.677 ioengine=libaio 00:11:11.677 direct=1 00:11:11.677 bs=4096 00:11:11.677 iodepth=128 00:11:11.677 norandommap=0 00:11:11.677 numjobs=1 00:11:11.677 00:11:11.677 verify_dump=1 00:11:11.677 verify_backlog=512 00:11:11.677 verify_state_save=0 00:11:11.677 do_verify=1 00:11:11.677 verify=crc32c-intel 00:11:11.677 [job0] 00:11:11.677 filename=/dev/nvme0n1 00:11:11.677 [job1] 00:11:11.677 filename=/dev/nvme0n2 00:11:11.677 [job2] 00:11:11.677 filename=/dev/nvme0n3 00:11:11.677 [job3] 00:11:11.677 filename=/dev/nvme0n4 00:11:11.677 Could not set queue depth (nvme0n1) 00:11:11.677 Could not set queue depth (nvme0n2) 00:11:11.677 Could not set queue depth (nvme0n3) 00:11:11.677 Could not set queue depth (nvme0n4) 00:11:11.677 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.677 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.677 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.677 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.677 fio-3.35 00:11:11.677 Starting 4 threads 00:11:13.050 00:11:13.050 job0: (groupid=0, jobs=1): err= 0: pid=2406474: Thu Jul 25 07:16:45 2024 00:11:13.050 read: IOPS=3213, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1006msec) 00:11:13.050 slat (usec): min=2, max=10077, 
avg=139.26, stdev=753.42 00:11:13.050 clat (usec): min=611, max=33867, avg=17870.36, stdev=4249.66 00:11:13.050 lat (usec): min=10688, max=33905, avg=18009.62, stdev=4278.68 00:11:13.050 clat percentiles (usec): 00:11:13.050 | 1.00th=[10945], 5.00th=[13173], 10.00th=[13829], 20.00th=[14615], 00:11:13.050 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16057], 60.00th=[17957], 00:11:13.050 | 70.00th=[19268], 80.00th=[21627], 90.00th=[24511], 95.00th=[25822], 00:11:13.050 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31589], 99.95th=[32113], 00:11:13.050 | 99.99th=[33817] 00:11:13.050 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:11:13.050 slat (usec): min=3, max=15237, avg=144.37, stdev=847.05 00:11:13.050 clat (usec): min=10685, max=42691, avg=19340.43, stdev=4786.52 00:11:13.050 lat (usec): min=10694, max=42736, avg=19484.81, stdev=4859.87 00:11:13.050 clat percentiles (usec): 00:11:13.050 | 1.00th=[11731], 5.00th=[13698], 10.00th=[14877], 20.00th=[15533], 00:11:13.050 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17695], 60.00th=[18744], 00:11:13.050 | 70.00th=[21103], 80.00th=[22676], 90.00th=[27395], 95.00th=[29754], 00:11:13.050 | 99.00th=[31065], 99.50th=[32900], 99.90th=[37487], 99.95th=[40633], 00:11:13.050 | 99.99th=[42730] 00:11:13.050 bw ( KiB/s): min=13432, max=15240, per=23.72%, avg=14336.00, stdev=1278.45, samples=2 00:11:13.050 iops : min= 3358, max= 3810, avg=3584.00, stdev=319.61, samples=2 00:11:13.050 lat (usec) : 750=0.01% 00:11:13.050 lat (msec) : 20=71.66%, 50=28.33% 00:11:13.050 cpu : usr=4.68%, sys=7.06%, ctx=388, majf=0, minf=11 00:11:13.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:13.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.050 issued rwts: total=3233,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.050 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:11:13.050 job1: (groupid=0, jobs=1): err= 0: pid=2406475: Thu Jul 25 07:16:45 2024 00:11:13.050 read: IOPS=6056, BW=23.7MiB/s (24.8MB/s)(23.8MiB/1005msec) 00:11:13.050 slat (usec): min=2, max=7021, avg=80.87, stdev=475.08 00:11:13.050 clat (usec): min=1482, max=19531, avg=10705.09, stdev=1719.00 00:11:13.050 lat (usec): min=4946, max=19646, avg=10785.97, stdev=1736.33 00:11:13.050 clat percentiles (usec): 00:11:13.050 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9896], 00:11:13.050 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:11:13.050 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12256], 95.00th=[13698], 00:11:13.050 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:11:13.050 | 99.99th=[19530] 00:11:13.050 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:11:13.050 slat (usec): min=4, max=7794, avg=70.22, stdev=391.04 00:11:13.050 clat (usec): min=1548, max=41288, avg=10136.64, stdev=2384.64 00:11:13.050 lat (usec): min=1570, max=41294, avg=10206.86, stdev=2388.92 00:11:13.050 clat percentiles (usec): 00:11:13.050 | 1.00th=[ 4146], 5.00th=[ 6718], 10.00th=[ 8717], 20.00th=[ 9372], 00:11:13.050 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:11:13.051 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11338], 95.00th=[12518], 00:11:13.051 | 99.00th=[19530], 99.50th=[25297], 99.90th=[36439], 99.95th=[41157], 00:11:13.051 | 99.99th=[41157] 00:11:13.051 bw ( KiB/s): min=24526, max=24576, per=40.62%, avg=24551.00, stdev=35.36, samples=2 00:11:13.051 iops : min= 6131, max= 6144, avg=6137.50, stdev= 9.19, samples=2 00:11:13.051 lat (msec) : 2=0.13%, 4=0.33%, 10=34.23%, 20=65.00%, 50=0.31% 00:11:13.051 cpu : usr=7.37%, sys=13.45%, ctx=503, majf=0, minf=11 00:11:13.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:13.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:13.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.051 issued rwts: total=6087,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.051 job2: (groupid=0, jobs=1): err= 0: pid=2406476: Thu Jul 25 07:16:45 2024 00:11:13.051 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:11:13.051 slat (usec): min=3, max=27186, avg=177.48, stdev=1290.74 00:11:13.051 clat (usec): min=4838, max=74248, avg=23644.21, stdev=14805.75 00:11:13.051 lat (usec): min=4847, max=74261, avg=23821.69, stdev=14925.40 00:11:13.051 clat percentiles (usec): 00:11:13.051 | 1.00th=[ 5211], 5.00th=[11338], 10.00th=[13566], 20.00th=[14353], 00:11:13.051 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15664], 60.00th=[17171], 00:11:13.051 | 70.00th=[24511], 80.00th=[35390], 90.00th=[47973], 95.00th=[55837], 00:11:13.051 | 99.00th=[65799], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:11:13.051 | 99.99th=[73925] 00:11:13.051 write: IOPS=2556, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1004msec); 0 zone resets 00:11:13.051 slat (usec): min=4, max=25494, avg=202.18, stdev=1476.25 00:11:13.051 clat (usec): min=1215, max=69723, avg=25843.50, stdev=12616.71 00:11:13.051 lat (usec): min=4764, max=69744, avg=26045.68, stdev=12743.34 00:11:13.051 clat percentiles (usec): 00:11:13.051 | 1.00th=[11076], 5.00th=[12911], 10.00th=[13435], 20.00th=[13566], 00:11:13.051 | 30.00th=[15008], 40.00th=[16581], 50.00th=[24249], 60.00th=[27395], 00:11:13.051 | 70.00th=[31589], 80.00th=[36963], 90.00th=[44303], 95.00th=[49546], 00:11:13.051 | 99.00th=[59507], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:11:13.051 | 99.99th=[69731] 00:11:13.051 bw ( KiB/s): min= 9748, max=10712, per=16.93%, avg=10230.00, stdev=681.65, samples=2 00:11:13.051 iops : min= 2437, max= 2678, avg=2557.50, stdev=170.41, samples=2 00:11:13.051 lat (msec) : 2=0.02%, 10=2.54%, 20=52.49%, 50=37.78%, 100=7.18% 00:11:13.051 cpu : 
usr=4.69%, sys=4.49%, ctx=198, majf=0, minf=17 00:11:13.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:13.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.051 issued rwts: total=2560,2567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.051 job3: (groupid=0, jobs=1): err= 0: pid=2406477: Thu Jul 25 07:16:45 2024 00:11:13.051 read: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(12.1MiB/1051msec) 00:11:13.051 slat (usec): min=3, max=15191, avg=136.99, stdev=963.94 00:11:13.051 clat (usec): min=10419, max=61575, avg=17432.96, stdev=6101.94 00:11:13.051 lat (usec): min=10434, max=61592, avg=17569.95, stdev=6192.80 00:11:13.051 clat percentiles (usec): 00:11:13.051 | 1.00th=[11076], 5.00th=[11994], 10.00th=[13042], 20.00th=[14222], 00:11:13.051 | 30.00th=[14877], 40.00th=[15795], 50.00th=[16319], 60.00th=[16712], 00:11:13.051 | 70.00th=[17433], 80.00th=[18482], 90.00th=[21627], 95.00th=[26084], 00:11:13.051 | 99.00th=[51119], 99.50th=[54789], 99.90th=[61604], 99.95th=[61604], 00:11:13.051 | 99.99th=[61604] 00:11:13.051 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec); 0 zone resets 00:11:13.051 slat (usec): min=5, max=13495, avg=150.82, stdev=859.95 00:11:13.051 clat (usec): min=1300, max=62617, avg=22162.51, stdev=14354.48 00:11:13.051 lat (usec): min=1311, max=62635, avg=22313.33, stdev=14426.46 00:11:13.051 clat percentiles (usec): 00:11:13.051 | 1.00th=[ 7439], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[11600], 00:11:13.051 | 30.00th=[12649], 40.00th=[13960], 50.00th=[14746], 60.00th=[19006], 00:11:13.051 | 70.00th=[23987], 80.00th=[36439], 90.00th=[49021], 95.00th=[53216], 00:11:13.051 | 99.00th=[59507], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:11:13.051 | 99.99th=[62653] 00:11:13.051 bw ( KiB/s): min=12680, max=15057, per=22.95%, 
avg=13868.50, stdev=1680.79, samples=2 00:11:13.051 iops : min= 3170, max= 3764, avg=3467.00, stdev=420.02, samples=2 00:11:13.051 lat (msec) : 2=0.03%, 10=3.40%, 20=70.19%, 50=21.54%, 100=4.83% 00:11:13.051 cpu : usr=4.19%, sys=7.24%, ctx=278, majf=0, minf=11 00:11:13.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:13.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.051 issued rwts: total=3086,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.051 00:11:13.051 Run status group 0 (all jobs): 00:11:13.051 READ: bw=55.6MiB/s (58.3MB/s), 9.96MiB/s-23.7MiB/s (10.4MB/s-24.8MB/s), io=58.5MiB (61.3MB), run=1004-1051msec 00:11:13.051 WRITE: bw=59.0MiB/s (61.9MB/s), 9.99MiB/s-23.9MiB/s (10.5MB/s-25.0MB/s), io=62.0MiB (65.0MB), run=1004-1051msec 00:11:13.051 00:11:13.051 Disk stats (read/write): 00:11:13.051 nvme0n1: ios=2672/3072, merge=0/0, ticks=23291/28059, in_queue=51350, util=87.27% 00:11:13.051 nvme0n2: ios=5172/5311, merge=0/0, ticks=26800/26695, in_queue=53495, util=97.97% 00:11:13.051 nvme0n3: ios=2091/2223, merge=0/0, ticks=26562/25264, in_queue=51826, util=96.76% 00:11:13.051 nvme0n4: ios=2608/2919, merge=0/0, ticks=43627/60233, in_queue=103860, util=98.74% 00:11:13.051 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:13.051 [global] 00:11:13.051 thread=1 00:11:13.051 invalidate=1 00:11:13.051 rw=randwrite 00:11:13.051 time_based=1 00:11:13.051 runtime=1 00:11:13.051 ioengine=libaio 00:11:13.051 direct=1 00:11:13.051 bs=4096 00:11:13.051 iodepth=128 00:11:13.051 norandommap=0 00:11:13.051 numjobs=1 00:11:13.051 00:11:13.051 verify_dump=1 00:11:13.051 verify_backlog=512 00:11:13.051 
verify_state_save=0 00:11:13.051 do_verify=1 00:11:13.051 verify=crc32c-intel 00:11:13.051 [job0] 00:11:13.051 filename=/dev/nvme0n1 00:11:13.051 [job1] 00:11:13.051 filename=/dev/nvme0n2 00:11:13.051 [job2] 00:11:13.051 filename=/dev/nvme0n3 00:11:13.051 [job3] 00:11:13.051 filename=/dev/nvme0n4 00:11:13.051 Could not set queue depth (nvme0n1) 00:11:13.051 Could not set queue depth (nvme0n2) 00:11:13.051 Could not set queue depth (nvme0n3) 00:11:13.051 Could not set queue depth (nvme0n4) 00:11:13.312 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.312 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.312 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.312 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.312 fio-3.35 00:11:13.312 Starting 4 threads 00:11:14.686 00:11:14.686 job0: (groupid=0, jobs=1): err= 0: pid=2406832: Thu Jul 25 07:16:46 2024 00:11:14.686 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:11:14.686 slat (usec): min=2, max=28937, avg=244.33, stdev=1957.43 00:11:14.686 clat (msec): min=9, max=101, avg=31.38, stdev=20.51 00:11:14.686 lat (msec): min=9, max=101, avg=31.62, stdev=20.68 00:11:14.686 clat percentiles (msec): 00:11:14.686 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:11:14.686 | 30.00th=[ 13], 40.00th=[ 21], 50.00th=[ 26], 60.00th=[ 31], 00:11:14.686 | 70.00th=[ 42], 80.00th=[ 53], 90.00th=[ 63], 95.00th=[ 67], 00:11:14.686 | 99.00th=[ 79], 99.50th=[ 79], 99.90th=[ 87], 99.95th=[ 88], 00:11:14.686 | 99.99th=[ 102] 00:11:14.686 write: IOPS=2456, BW=9827KiB/s (10.1MB/s)(9896KiB/1007msec); 0 zone resets 00:11:14.686 slat (usec): min=3, max=25488, avg=190.37, stdev=1219.58 00:11:14.686 clat (usec): min=4598, max=85291, avg=24365.40, 
stdev=15648.67 00:11:14.686 lat (usec): min=5015, max=85322, avg=24555.77, stdev=15768.05 00:11:14.686 clat percentiles (usec): 00:11:14.686 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 7635], 20.00th=[10683], 00:11:14.686 | 30.00th=[14353], 40.00th=[19530], 50.00th=[21365], 60.00th=[22152], 00:11:14.686 | 70.00th=[25822], 80.00th=[35914], 90.00th=[49021], 95.00th=[59507], 00:11:14.686 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[76022], 00:11:14.686 | 99.99th=[85459] 00:11:14.686 bw ( KiB/s): min= 8184, max=10592, per=16.81%, avg=9388.00, stdev=1702.71, samples=2 00:11:14.686 iops : min= 2046, max= 2648, avg=2347.00, stdev=425.68, samples=2 00:11:14.686 lat (msec) : 10=15.04%, 20=27.09%, 50=42.70%, 100=15.15%, 250=0.02% 00:11:14.686 cpu : usr=2.78%, sys=3.88%, ctx=190, majf=0, minf=1 00:11:14.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:14.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.686 issued rwts: total=2048,2474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.686 job1: (groupid=0, jobs=1): err= 0: pid=2406833: Thu Jul 25 07:16:46 2024 00:11:14.686 read: IOPS=5351, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec) 00:11:14.686 slat (usec): min=2, max=29886, avg=98.83, stdev=901.53 00:11:14.686 clat (usec): min=1991, max=94192, avg=12894.41, stdev=12449.46 00:11:14.686 lat (usec): min=1996, max=95279, avg=12993.24, stdev=12549.82 00:11:14.686 clat percentiles (usec): 00:11:14.686 | 1.00th=[ 4015], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[ 9110], 00:11:14.686 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:14.686 | 70.00th=[10290], 80.00th=[10814], 90.00th=[13304], 95.00th=[41681], 00:11:14.686 | 99.00th=[72877], 99.50th=[80217], 99.90th=[92799], 99.95th=[92799], 00:11:14.686 | 99.99th=[93848] 
00:11:14.686 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:14.686 slat (usec): min=3, max=8034, avg=69.55, stdev=405.68 00:11:14.686 clat (usec): min=782, max=94193, avg=10263.19, stdev=6461.71 00:11:14.686 lat (usec): min=789, max=94197, avg=10332.74, stdev=6466.50 00:11:14.686 clat percentiles (usec): 00:11:14.686 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 7898], 20.00th=[ 8586], 00:11:14.686 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:14.686 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[11076], 95.00th=[14222], 00:11:14.686 | 99.00th=[47973], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:11:14.686 | 99.99th=[93848] 00:11:14.686 bw ( KiB/s): min=17328, max=27728, per=40.34%, avg=22528.00, stdev=7353.91, samples=2 00:11:14.686 iops : min= 4332, max= 6932, avg=5632.00, stdev=1838.48, samples=2 00:11:14.686 lat (usec) : 1000=0.03% 00:11:14.686 lat (msec) : 2=0.04%, 4=0.02%, 10=71.74%, 20=23.51%, 50=2.27% 00:11:14.686 lat (msec) : 100=2.40% 00:11:14.686 cpu : usr=7.68%, sys=9.27%, ctx=397, majf=0, minf=1 00:11:14.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:14.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.686 issued rwts: total=5373,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.686 job2: (groupid=0, jobs=1): err= 0: pid=2406834: Thu Jul 25 07:16:46 2024 00:11:14.686 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:11:14.686 slat (usec): min=2, max=13635, avg=118.82, stdev=847.19 00:11:14.686 clat (usec): min=1231, max=60459, avg=14414.59, stdev=8000.35 00:11:14.686 lat (usec): min=1236, max=60466, avg=14533.40, stdev=8089.81 00:11:14.686 clat percentiles (usec): 00:11:14.686 | 1.00th=[ 3261], 5.00th=[ 5014], 10.00th=[ 7242], 20.00th=[11994], 
00:11:14.686 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12911], 60.00th=[13435], 00:11:14.686 | 70.00th=[14615], 80.00th=[14877], 90.00th=[20317], 95.00th=[32900], 00:11:14.686 | 99.00th=[53216], 99.50th=[58983], 99.90th=[60556], 99.95th=[60556], 00:11:14.686 | 99.99th=[60556] 00:11:14.686 write: IOPS=3454, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1011msec); 0 zone resets 00:11:14.686 slat (usec): min=3, max=25907, avg=175.57, stdev=1144.43 00:11:14.686 clat (usec): min=455, max=88166, avg=25599.77, stdev=21457.73 00:11:14.686 lat (usec): min=473, max=88176, avg=25775.35, stdev=21596.55 00:11:14.686 clat percentiles (usec): 00:11:14.686 | 1.00th=[ 1287], 5.00th=[ 2737], 10.00th=[ 4621], 20.00th=[ 8029], 00:11:14.686 | 30.00th=[11469], 40.00th=[13566], 50.00th=[17433], 60.00th=[22414], 00:11:14.686 | 70.00th=[33817], 80.00th=[46400], 90.00th=[61080], 95.00th=[66323], 00:11:14.686 | 99.00th=[83362], 99.50th=[85459], 99.90th=[88605], 99.95th=[88605], 00:11:14.686 | 99.99th=[88605] 00:11:14.686 bw ( KiB/s): min=12272, max=14640, per=24.09%, avg=13456.00, stdev=1674.43, samples=2 00:11:14.686 iops : min= 3068, max= 3660, avg=3364.00, stdev=418.61, samples=2 00:11:14.686 lat (usec) : 500=0.03%, 750=0.08%, 1000=0.12% 00:11:14.686 lat (msec) : 2=1.11%, 4=4.61%, 10=15.50%, 20=47.42%, 50=20.89% 00:11:14.686 lat (msec) : 100=10.24% 00:11:14.686 cpu : usr=2.08%, sys=4.46%, ctx=293, majf=0, minf=1 00:11:14.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:14.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.686 issued rwts: total=2560,3492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.686 job3: (groupid=0, jobs=1): err= 0: pid=2406835: Thu Jul 25 07:16:46 2024 00:11:14.686 read: IOPS=2049, BW=8197KiB/s (8394kB/s)(8312KiB/1014msec) 00:11:14.686 slat (usec): min=3, 
max=12715, avg=147.66, stdev=917.57 00:11:14.686 clat (usec): min=8565, max=50159, avg=16808.27, stdev=6411.24 00:11:14.686 lat (usec): min=8574, max=50173, avg=16955.94, stdev=6497.57 00:11:14.686 clat percentiles (usec): 00:11:14.686 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[12780], 00:11:14.686 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15270], 60.00th=[16188], 00:11:14.686 | 70.00th=[16581], 80.00th=[18220], 90.00th=[25035], 95.00th=[31851], 00:11:14.686 | 99.00th=[43779], 99.50th=[45351], 99.90th=[50070], 99.95th=[50070], 00:11:14.686 | 99.99th=[50070] 00:11:14.687 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec); 0 zone resets 00:11:14.687 slat (usec): min=4, max=13092, avg=261.58, stdev=1113.36 00:11:14.687 clat (msec): min=4, max=114, avg=36.55, stdev=23.92 00:11:14.687 lat (msec): min=4, max=114, avg=36.81, stdev=24.08 00:11:14.687 clat percentiles (msec): 00:11:14.687 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:11:14.687 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 32], 60.00th=[ 37], 00:11:14.687 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 68], 95.00th=[ 84], 00:11:14.687 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:11:14.687 | 99.99th=[ 114] 00:11:14.687 bw ( KiB/s): min= 9480, max=10224, per=17.64%, avg=9852.00, stdev=526.09, samples=2 00:11:14.687 iops : min= 2370, max= 2556, avg=2463.00, stdev=131.52, samples=2 00:11:14.687 lat (msec) : 10=2.80%, 20=50.45%, 50=31.37%, 100=13.37%, 250=2.01% 00:11:14.687 cpu : usr=2.57%, sys=5.33%, ctx=301, majf=0, minf=1 00:11:14.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:14.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.687 issued rwts: total=2078,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.687 00:11:14.687 Run 
status group 0 (all jobs): 00:11:14.687 READ: bw=46.5MiB/s (48.7MB/s), 8135KiB/s-20.9MiB/s (8330kB/s-21.9MB/s), io=47.1MiB (49.4MB), run=1004-1014msec 00:11:14.687 WRITE: bw=54.5MiB/s (57.2MB/s), 9827KiB/s-21.9MiB/s (10.1MB/s-23.0MB/s), io=55.3MiB (58.0MB), run=1004-1014msec 00:11:14.687 00:11:14.687 Disk stats (read/write): 00:11:14.687 nvme0n1: ios=1920/2048, merge=0/0, ticks=25638/21308, in_queue=46946, util=98.30% 00:11:14.687 nvme0n2: ios=4996/5120, merge=0/0, ticks=27976/21882, in_queue=49858, util=99.39% 00:11:14.687 nvme0n3: ios=2092/2253, merge=0/0, ticks=31520/58538, in_queue=90058, util=99.58% 00:11:14.687 nvme0n4: ios=1593/2039, merge=0/0, ticks=25066/79722, in_queue=104788, util=98.11% 00:11:14.687 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:14.687 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2406971 00:11:14.687 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:14.687 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:14.687 [global] 00:11:14.687 thread=1 00:11:14.687 invalidate=1 00:11:14.687 rw=read 00:11:14.687 time_based=1 00:11:14.687 runtime=10 00:11:14.687 ioengine=libaio 00:11:14.687 direct=1 00:11:14.687 bs=4096 00:11:14.687 iodepth=1 00:11:14.687 norandommap=1 00:11:14.687 numjobs=1 00:11:14.687 00:11:14.687 [job0] 00:11:14.687 filename=/dev/nvme0n1 00:11:14.687 [job1] 00:11:14.687 filename=/dev/nvme0n2 00:11:14.687 [job2] 00:11:14.687 filename=/dev/nvme0n3 00:11:14.687 [job3] 00:11:14.687 filename=/dev/nvme0n4 00:11:14.687 Could not set queue depth (nvme0n1) 00:11:14.687 Could not set queue depth (nvme0n2) 00:11:14.687 Could not set queue depth (nvme0n3) 00:11:14.687 Could not set queue depth (nvme0n4) 00:11:14.687 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.687 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.687 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.687 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.687 fio-3.35 00:11:14.687 Starting 4 threads 00:11:17.967 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:17.967 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:17.967 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=23556096, buflen=4096 00:11:17.967 fio: pid=2407062, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:17.967 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.967 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:17.967 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=28336128, buflen=4096 00:11:17.967 fio: pid=2407061, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:18.225 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=34525184, buflen=4096 00:11:18.225 fio: pid=2407059, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:18.225 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.225 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:18.484 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.484 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:18.484 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=33513472, buflen=4096 00:11:18.484 fio: pid=2407060, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:18.484 00:11:18.484 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2407059: Thu Jul 25 07:16:51 2024 00:11:18.484 read: IOPS=2449, BW=9798KiB/s (10.0MB/s)(32.9MiB/3441msec) 00:11:18.484 slat (usec): min=4, max=11589, avg=18.27, stdev=202.72 00:11:18.484 clat (usec): min=262, max=20699, avg=383.72, stdev=236.29 00:11:18.484 lat (usec): min=270, max=20706, avg=401.99, stdev=314.76 00:11:18.484 clat percentiles (usec): 00:11:18.484 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322], 00:11:18.484 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 375], 00:11:18.484 | 70.00th=[ 396], 80.00th=[ 437], 90.00th=[ 494], 95.00th=[ 537], 00:11:18.484 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 938], 99.95th=[ 1090], 00:11:18.484 | 99.99th=[20579] 00:11:18.484 bw ( KiB/s): min= 8344, max=10712, per=31.09%, avg=9824.00, stdev=800.70, samples=6 00:11:18.484 iops : min= 2086, max= 2678, avg=2456.00, stdev=200.18, samples=6 00:11:18.484 lat (usec) : 500=91.21%, 750=8.59%, 1000=0.11% 00:11:18.484 lat (msec) : 2=0.06%, 4=0.01%, 50=0.01% 00:11:18.484 cpu : usr=2.06%, sys=5.12%, ctx=8438, majf=0, minf=1 00:11:18.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 issued rwts: total=8430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.485 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2407060: Thu Jul 25 07:16:51 2024 00:11:18.485 read: IOPS=2208, BW=8831KiB/s (9043kB/s)(32.0MiB/3706msec) 00:11:18.485 slat (usec): min=5, max=16894, avg=16.85, stdev=237.81 00:11:18.485 clat (usec): min=253, max=42109, avg=429.69, stdev=2121.75 00:11:18.485 lat (usec): min=259, max=57981, avg=445.45, stdev=2194.44 00:11:18.485 clat percentiles (usec): 00:11:18.485 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 293], 00:11:18.485 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:11:18.485 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 367], 00:11:18.485 | 99.00th=[ 474], 99.50th=[ 619], 99.90th=[41157], 99.95th=[42206], 00:11:18.485 | 99.99th=[42206] 00:11:18.485 bw ( KiB/s): min= 93, max=12592, per=29.57%, avg=9345.86, stdev=4803.55, samples=7 00:11:18.485 iops : min= 23, max= 3148, avg=2336.43, stdev=1200.97, samples=7 00:11:18.485 lat (usec) : 500=99.14%, 750=0.38%, 1000=0.15% 00:11:18.485 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 50=0.27% 00:11:18.485 cpu : usr=2.16%, sys=4.18%, ctx=8186, majf=0, minf=1 00:11:18.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 issued rwts: total=8183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.485 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2407061: Thu Jul 25 07:16:51 2024 00:11:18.485 read: IOPS=2160, BW=8642KiB/s 
(8850kB/s)(27.0MiB/3202msec) 00:11:18.485 slat (nsec): min=4941, max=72009, avg=14966.30, stdev=9526.55 00:11:18.485 clat (usec): min=268, max=45621, avg=440.87, stdev=1404.18 00:11:18.485 lat (usec): min=275, max=45628, avg=455.83, stdev=1404.19 00:11:18.485 clat percentiles (usec): 00:11:18.485 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 355], 00:11:18.485 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 396], 00:11:18.485 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 457], 95.00th=[ 490], 00:11:18.485 | 99.00th=[ 545], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:11:18.485 | 99.99th=[45876] 00:11:18.485 bw ( KiB/s): min= 7176, max=10360, per=29.17%, avg=9217.33, stdev=1073.28, samples=6 00:11:18.485 iops : min= 1794, max= 2590, avg=2304.33, stdev=268.32, samples=6 00:11:18.485 lat (usec) : 500=96.21%, 750=3.31%, 1000=0.26% 00:11:18.485 lat (msec) : 2=0.09%, 50=0.12% 00:11:18.485 cpu : usr=1.66%, sys=4.65%, ctx=6919, majf=0, minf=1 00:11:18.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 issued rwts: total=6919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.485 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2407062: Thu Jul 25 07:16:51 2024 00:11:18.485 read: IOPS=1976, BW=7905KiB/s (8095kB/s)(22.5MiB/2910msec) 00:11:18.485 slat (nsec): min=4772, max=75558, avg=21463.82, stdev=10955.73 00:11:18.485 clat (usec): min=271, max=41315, avg=475.41, stdev=812.01 00:11:18.485 lat (usec): min=279, max=41347, avg=496.88, stdev=812.21 00:11:18.485 clat percentiles (usec): 00:11:18.485 | 1.00th=[ 310], 5.00th=[ 343], 10.00th=[ 371], 20.00th=[ 400], 00:11:18.485 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 457], 
60.00th=[ 474], 00:11:18.485 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 562], 00:11:18.485 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 1369], 99.95th=[20579], 00:11:18.485 | 99.99th=[41157] 00:11:18.485 bw ( KiB/s): min= 7560, max= 8248, per=25.00%, avg=7900.80, stdev=321.18, samples=5 00:11:18.485 iops : min= 1890, max= 2062, avg=1975.20, stdev=80.29, samples=5 00:11:18.485 lat (usec) : 500=74.50%, 750=25.31%, 1000=0.05% 00:11:18.485 lat (msec) : 2=0.05%, 10=0.02%, 50=0.05% 00:11:18.485 cpu : usr=1.93%, sys=4.81%, ctx=5752, majf=0, minf=1 00:11:18.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.485 issued rwts: total=5752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.485 00:11:18.485 Run status group 0 (all jobs): 00:11:18.485 READ: bw=30.9MiB/s (32.4MB/s), 7905KiB/s-9798KiB/s (8095kB/s-10.0MB/s), io=114MiB (120MB), run=2910-3706msec 00:11:18.485 00:11:18.485 Disk stats (read/write): 00:11:18.485 nvme0n1: ios=8249/0, merge=0/0, ticks=4056/0, in_queue=4056, util=98.23% 00:11:18.485 nvme0n2: ios=8180/0, merge=0/0, ticks=3198/0, in_queue=3198, util=95.90% 00:11:18.485 nvme0n3: ios=6916/0, merge=0/0, ticks=2838/0, in_queue=2838, util=96.79% 00:11:18.485 nvme0n4: ios=5665/0, merge=0/0, ticks=2574/0, in_queue=2574, util=96.74% 00:11:18.743 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.743 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:19.001 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.001 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:19.259 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.259 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:19.517 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.517 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:19.775 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:19.775 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2406971 00:11:19.775 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:19.775 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.033 07:16:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:20.033 nvmf hotplug test: fio failed as expected 00:11:20.033 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.290 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:20.290 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:20.290 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:20.290 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.291 07:16:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.291 rmmod nvme_tcp 00:11:20.291 rmmod nvme_fabrics 00:11:20.291 rmmod nvme_keyring 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2404870 ']' 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2404870 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2404870 ']' 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2404870 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2404870 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2404870' 00:11:20.291 killing process with pid 2404870 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2404870 00:11:20.291 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2404870 
00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.549 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.079 00:11:23.079 real 0m23.996s 00:11:23.079 user 1m21.734s 00:11:23.079 sys 0m8.266s 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.079 ************************************ 00:11:23.079 END TEST nvmf_fio_target 00:11:23.079 ************************************ 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.079 ************************************ 00:11:23.079 START TEST nvmf_bdevio 00:11:23.079 ************************************ 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:23.079 * Looking for test storage... 00:11:23.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.079 07:16:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.079 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.981 07:16:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.981 07:16:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:24.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.981 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:24.982 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:24.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:24.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:11:24.982 00:11:24.982 --- 10.0.0.2 ping statistics --- 00:11:24.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.982 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:11:24.982 00:11:24.982 --- 10.0.0.1 ping statistics --- 00:11:24.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.982 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2409685 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2409685 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2409685 ']' 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.982 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.982 [2024-07-25 07:16:57.359978] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:11:24.982 [2024-07-25 07:16:57.360076] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.982 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.982 [2024-07-25 07:16:57.425649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.240 [2024-07-25 07:16:57.540980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.240 [2024-07-25 07:16:57.541030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.241 [2024-07-25 07:16:57.541058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.241 [2024-07-25 07:16:57.541069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.241 [2024-07-25 07:16:57.541079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:25.241 [2024-07-25 07:16:57.541164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:25.241 [2024-07-25 07:16:57.541190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:25.241 [2024-07-25 07:16:57.541251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:25.241 [2024-07-25 07:16:57.541255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 [2024-07-25 07:16:57.697719] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.241 07:16:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 Malloc0 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 [2024-07-25 07:16:57.751500] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:25.241 { 00:11:25.241 "params": { 00:11:25.241 "name": "Nvme$subsystem", 00:11:25.241 "trtype": "$TEST_TRANSPORT", 00:11:25.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.241 "adrfam": "ipv4", 00:11:25.241 "trsvcid": "$NVMF_PORT", 00:11:25.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.241 "hdgst": ${hdgst:-false}, 00:11:25.241 "ddgst": ${ddgst:-false} 00:11:25.241 }, 00:11:25.241 "method": "bdev_nvme_attach_controller" 00:11:25.241 } 00:11:25.241 EOF 00:11:25.241 )") 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:25.241 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:25.241 "params": { 00:11:25.241 "name": "Nvme1", 00:11:25.241 "trtype": "tcp", 00:11:25.241 "traddr": "10.0.0.2", 00:11:25.241 "adrfam": "ipv4", 00:11:25.241 "trsvcid": "4420", 00:11:25.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.241 "hdgst": false, 00:11:25.241 "ddgst": false 00:11:25.241 }, 00:11:25.241 "method": "bdev_nvme_attach_controller" 00:11:25.241 }' 00:11:25.499 [2024-07-25 07:16:57.800613] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:25.499 [2024-07-25 07:16:57.800688] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409716 ] 00:11:25.499 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.499 [2024-07-25 07:16:57.864213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:25.499 [2024-07-25 07:16:57.978003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.499 [2024-07-25 07:16:57.978053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.499 [2024-07-25 07:16:57.978056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.757 I/O targets: 00:11:25.757 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:25.757 00:11:25.757 00:11:25.757 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.757 http://cunit.sourceforge.net/ 00:11:25.757 00:11:25.757 00:11:25.757 Suite: bdevio tests on: Nvme1n1 00:11:26.015 Test: blockdev write read block ...passed 00:11:26.015 Test: blockdev write zeroes read block ...passed 00:11:26.015 Test: blockdev write zeroes read no split 
...passed 00:11:26.015 Test: blockdev write zeroes read split ...passed 00:11:26.015 Test: blockdev write zeroes read split partial ...passed 00:11:26.015 Test: blockdev reset ...[2024-07-25 07:16:58.477155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:26.015 [2024-07-25 07:16:58.477270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2409600 (9): Bad file descriptor 00:11:26.015 [2024-07-25 07:16:58.493738] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:26.015 passed 00:11:26.015 Test: blockdev write read 8 blocks ...passed 00:11:26.015 Test: blockdev write read size > 128k ...passed 00:11:26.015 Test: blockdev write read invalid size ...passed 00:11:26.273 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:26.273 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:26.273 Test: blockdev write read max offset ...passed 00:11:26.273 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:26.273 Test: blockdev writev readv 8 blocks ...passed 00:11:26.273 Test: blockdev writev readv 30 x 1block ...passed 00:11:26.273 Test: blockdev writev readv block ...passed 00:11:26.273 Test: blockdev writev readv size > 128k ...passed 00:11:26.273 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:26.273 Test: blockdev comparev and writev ...[2024-07-25 07:16:58.751178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.751213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.751237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:26.273 [2024-07-25 07:16:58.751264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.751623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.751648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.751670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.751686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.752055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.752079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.752100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.752117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.752493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.752525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:26.273 [2024-07-25 07:16:58.752547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.273 [2024-07-25 07:16:58.752563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:26.273 passed 00:11:26.531 Test: blockdev nvme passthru rw ...passed 00:11:26.531 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:16:58.835595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.531 [2024-07-25 07:16:58.835624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:26.531 [2024-07-25 07:16:58.835805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.531 [2024-07-25 07:16:58.835827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:26.531 [2024-07-25 07:16:58.836004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.531 [2024-07-25 07:16:58.836026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:26.531 [2024-07-25 07:16:58.836200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.531 [2024-07-25 07:16:58.836222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:26.531 passed 00:11:26.531 Test: blockdev nvme admin passthru ...passed 00:11:26.531 Test: blockdev copy ...passed 00:11:26.531 00:11:26.531 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.531 suites 1 1 n/a 0 0 00:11:26.531 tests 23 23 23 0 0 00:11:26.531 asserts 152 152 152 0 n/a 00:11:26.531 00:11:26.531 Elapsed time = 
1.233 seconds 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.789 rmmod nvme_tcp 00:11:26.789 rmmod nvme_fabrics 00:11:26.789 rmmod nvme_keyring 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2409685 ']' 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2409685 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 2409685 ']' 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2409685 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409685 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409685' 00:11:26.789 killing process with pid 2409685 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2409685 00:11:26.789 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2409685 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.069 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.069 
07:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.605 00:11:29.605 real 0m6.442s 00:11:29.605 user 0m10.857s 00:11:29.605 sys 0m2.105s 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.605 ************************************ 00:11:29.605 END TEST nvmf_bdevio 00:11:29.605 ************************************ 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:29.605 00:11:29.605 real 3m56.152s 00:11:29.605 user 10m11.087s 00:11:29.605 sys 1m9.282s 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.605 ************************************ 00:11:29.605 END TEST nvmf_target_core 00:11:29.605 ************************************ 00:11:29.605 07:17:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:29.605 07:17:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.605 07:17:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.605 07:17:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.605 ************************************ 00:11:29.605 START TEST nvmf_target_extra 00:11:29.605 ************************************ 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:29.605 * Looking for test storage... 
00:11:29.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:29.605 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.606 07:17:01 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.606 ************************************ 00:11:29.606 START TEST nvmf_example 00:11:29.606 ************************************ 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:29.606 * Looking for test storage... 
00:11:29.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.606 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:29.607 07:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.607 07:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.607 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.505 07:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.505 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:11:31.506 00:11:31.506 --- 10.0.0.2 ping statistics --- 00:11:31.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.506 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:11:31.506 00:11:31.506 --- 10.0.0.1 ping statistics --- 00:11:31.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.506 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2411956 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2411956 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2411956 ']' 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.506 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:31.506 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.440 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:32.441 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:32.441 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.635 Initializing NVMe Controllers 00:11:44.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:44.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:44.635 Initialization complete. Launching workers. 00:11:44.635 ======================================================== 00:11:44.635 Latency(us) 00:11:44.635 Device Information : IOPS MiB/s Average min max 00:11:44.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14752.30 57.63 4339.70 1000.05 15737.69 00:11:44.635 ======================================================== 00:11:44.635 Total : 14752.30 57.63 4339.70 1000.05 15737.69 00:11:44.635 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.635 rmmod nvme_tcp 00:11:44.635 rmmod nvme_fabrics 00:11:44.635 rmmod nvme_keyring 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2411956 ']' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2411956 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2411956 ']' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2411956 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411956 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411956' 00:11:44.635 killing process with pid 2411956 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2411956 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2411956 00:11:44.635 nvmf threads initialize successfully 00:11:44.635 bdev subsystem init successfully 00:11:44.635 created a nvmf target service 00:11:44.635 create targets's poll groups done 00:11:44.635 all subsystems of target started 00:11:44.635 nvmf target is running 00:11:44.635 all subsystems of target stopped 00:11:44.635 destroy targets's poll groups done 00:11:44.635 destroyed the nvmf target 
service 00:11:44.635 bdev subsystem finish successfully 00:11:44.635 nvmf threads destroy successfully 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.635 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:45.204 00:11:45.204 real 0m15.831s 00:11:45.204 user 0m45.007s 00:11:45.204 sys 0m3.303s 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:45.204 ************************************ 00:11:45.204 END TEST nvmf_example 00:11:45.204 ************************************ 00:11:45.204 07:17:17 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.204 ************************************ 00:11:45.204 START TEST nvmf_filesystem 00:11:45.204 ************************************ 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:45.204 * Looking for test storage... 00:11:45.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:45.204 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:45.205 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:45.205 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:45.205 #define SPDK_CONFIG_H 00:11:45.205 #define SPDK_CONFIG_APPS 1 00:11:45.205 #define SPDK_CONFIG_ARCH native 00:11:45.205 #undef SPDK_CONFIG_ASAN 00:11:45.205 #undef SPDK_CONFIG_AVAHI 00:11:45.205 #undef SPDK_CONFIG_CET 00:11:45.205 #define SPDK_CONFIG_COVERAGE 1 00:11:45.205 #define SPDK_CONFIG_CROSS_PREFIX 00:11:45.205 #undef SPDK_CONFIG_CRYPTO 00:11:45.205 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:45.205 #undef SPDK_CONFIG_CUSTOMOCF 00:11:45.205 #undef SPDK_CONFIG_DAOS 00:11:45.205 #define SPDK_CONFIG_DAOS_DIR 00:11:45.205 #define SPDK_CONFIG_DEBUG 1 00:11:45.205 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:45.205 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:45.205 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:45.205 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:45.205 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:45.205 #undef SPDK_CONFIG_DPDK_UADK 00:11:45.205 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:45.205 #define SPDK_CONFIG_EXAMPLES 1 00:11:45.205 #undef SPDK_CONFIG_FC 00:11:45.205 #define SPDK_CONFIG_FC_PATH 00:11:45.205 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:45.205 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:45.205 
#undef SPDK_CONFIG_FUSE 00:11:45.205 #undef SPDK_CONFIG_FUZZER 00:11:45.205 #define SPDK_CONFIG_FUZZER_LIB 00:11:45.205 #undef SPDK_CONFIG_GOLANG 00:11:45.205 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:45.205 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:45.205 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:45.205 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:45.205 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:45.205 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:45.205 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:45.205 #define SPDK_CONFIG_IDXD 1 00:11:45.205 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:45.205 #undef SPDK_CONFIG_IPSEC_MB 00:11:45.205 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:45.205 #define SPDK_CONFIG_ISAL 1 00:11:45.205 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:45.205 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:45.205 #define SPDK_CONFIG_LIBDIR 00:11:45.205 #undef SPDK_CONFIG_LTO 00:11:45.205 #define SPDK_CONFIG_MAX_LCORES 128 00:11:45.205 #define SPDK_CONFIG_NVME_CUSE 1 00:11:45.205 #undef SPDK_CONFIG_OCF 00:11:45.205 #define SPDK_CONFIG_OCF_PATH 00:11:45.205 #define SPDK_CONFIG_OPENSSL_PATH 00:11:45.205 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:45.205 #define SPDK_CONFIG_PGO_DIR 00:11:45.205 #undef SPDK_CONFIG_PGO_USE 00:11:45.205 #define SPDK_CONFIG_PREFIX /usr/local 00:11:45.205 #undef SPDK_CONFIG_RAID5F 00:11:45.205 #undef SPDK_CONFIG_RBD 00:11:45.205 #define SPDK_CONFIG_RDMA 1 00:11:45.205 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:45.206 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:45.206 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:45.206 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:45.206 #define SPDK_CONFIG_SHARED 1 00:11:45.206 #undef SPDK_CONFIG_SMA 00:11:45.206 #define SPDK_CONFIG_TESTS 1 00:11:45.206 #undef SPDK_CONFIG_TSAN 00:11:45.206 #define SPDK_CONFIG_UBLK 1 00:11:45.206 #define SPDK_CONFIG_UBSAN 1 00:11:45.206 #undef SPDK_CONFIG_UNIT_TESTS 00:11:45.206 #undef SPDK_CONFIG_URING 00:11:45.206 #define SPDK_CONFIG_URING_PATH 00:11:45.206 #undef 
SPDK_CONFIG_URING_ZNS 00:11:45.206 #undef SPDK_CONFIG_USDT 00:11:45.206 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:45.206 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:45.206 #define SPDK_CONFIG_VFIO_USER 1 00:11:45.206 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:45.206 #define SPDK_CONFIG_VHOST 1 00:11:45.206 #define SPDK_CONFIG_VIRTIO 1 00:11:45.206 #undef SPDK_CONFIG_VTUNE 00:11:45.206 #define SPDK_CONFIG_VTUNE_DIR 00:11:45.206 #define SPDK_CONFIG_WERROR 1 00:11:45.206 #define SPDK_CONFIG_WPDK_DIR 00:11:45.206 #undef SPDK_CONFIG_XNVME 00:11:45.206 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.206 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:45.206 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:45.206 
07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:45.206 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:45.206 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:45.207 
07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:45.207 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:45.207 
07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:45.207 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2413649 ]] 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2413649 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.FZuYzV 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FZuYzV/tests/target /tmp/spdk.FZuYzV 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:45.208 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55504850944 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994729472 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6489878528 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935183360 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:45.208 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376539136 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398948352 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996361216 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1003520 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199468032 00:11:45.208 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199472128 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:45.209 * Looking for test storage... 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55504850944 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=8704471040 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.209 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.468 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.468 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.468 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.369 07:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.369 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.370 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.370 07:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:47.370 00:11:47.370 --- 10.0.0.2 ping statistics --- 00:11:47.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.370 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:11:47.370 00:11:47.370 --- 10.0.0.1 ping statistics --- 00:11:47.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.370 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:47.370 07:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.370 ************************************ 00:11:47.370 START TEST nvmf_filesystem_no_in_capsule 00:11:47.370 ************************************ 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.370 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2415272 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2415272 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 2415272 ']' 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.371 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.629 [2024-07-25 07:17:19.907034] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:47.629 [2024-07-25 07:17:19.907109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.629 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.629 [2024-07-25 07:17:19.971848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.629 [2024-07-25 07:17:20.090530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.629 [2024-07-25 07:17:20.090595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
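For readers reproducing this rig by hand, the nvmf/common.sh network wiring traced above (steps @248 through @264) boils down to the sequence below. This is a dry-run sketch: the commands are collected and printed rather than executed, since actually running them needs root plus the cvl_0_0/cvl_0_1 interfaces that exist only on the CI machine.

```shell
# Dry-run sketch of the netns wiring from nvmf/common.sh@248-264 above.
# Commands are printed, not executed: running them requires root and the
# cvl_0_0/cvl_0_1 interfaces present on the test host.
ns=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $ns"                                              # @248
  "ip link set cvl_0_0 netns $ns"                                 # @251
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                           # @254
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"         # @255
  "ip link set cvl_0_1 up"                                        # @258
  "ip netns exec $ns ip link set cvl_0_0 up"                      # @260
  "ip netns exec $ns ip link set lo up"                           # @261
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # @264
)
printf '%s\n' "${cmds[@]}"
```

After this wiring the host side pings 10.0.0.2 and the namespace pings 10.0.0.1 back, which is exactly what the two ping transcripts above verify before the target is started inside the namespace.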
00:11:47.629 [2024-07-25 07:17:20.090623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.629 [2024-07-25 07:17:20.090635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.629 [2024-07-25 07:17:20.090645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.629 [2024-07-25 07:17:20.090697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.629 [2024-07-25 07:17:20.090759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.629 [2024-07-25 07:17:20.090825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.629 [2024-07-25 07:17:20.090828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.887 [2024-07-25 07:17:20.241703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.887 Malloc1 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.887 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.887 [2024-07-25 07:17:20.415148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:48.145 07:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:48.145 { 00:11:48.145 "name": "Malloc1", 00:11:48.145 "aliases": [ 00:11:48.145 "4e235876-caf6-4e2d-8660-d58145844413" 00:11:48.145 ], 00:11:48.145 "product_name": "Malloc disk", 00:11:48.145 "block_size": 512, 00:11:48.145 "num_blocks": 1048576, 00:11:48.145 "uuid": "4e235876-caf6-4e2d-8660-d58145844413", 00:11:48.145 "assigned_rate_limits": { 00:11:48.145 "rw_ios_per_sec": 0, 00:11:48.145 "rw_mbytes_per_sec": 0, 00:11:48.145 "r_mbytes_per_sec": 0, 00:11:48.145 "w_mbytes_per_sec": 0 00:11:48.145 }, 00:11:48.145 "claimed": true, 00:11:48.145 "claim_type": "exclusive_write", 00:11:48.145 "zoned": false, 00:11:48.145 "supported_io_types": { 00:11:48.145 "read": true, 00:11:48.145 "write": true, 00:11:48.145 "unmap": true, 00:11:48.145 "flush": true, 00:11:48.145 "reset": true, 00:11:48.145 "nvme_admin": false, 00:11:48.145 "nvme_io": false, 00:11:48.145 "nvme_io_md": false, 00:11:48.145 "write_zeroes": true, 00:11:48.145 "zcopy": true, 00:11:48.145 "get_zone_info": false, 00:11:48.145 "zone_management": false, 00:11:48.145 "zone_append": false, 00:11:48.145 "compare": false, 00:11:48.145 "compare_and_write": 
false, 00:11:48.145 "abort": true, 00:11:48.145 "seek_hole": false, 00:11:48.145 "seek_data": false, 00:11:48.145 "copy": true, 00:11:48.145 "nvme_iov_md": false 00:11:48.145 }, 00:11:48.145 "memory_domains": [ 00:11:48.145 { 00:11:48.145 "dma_device_id": "system", 00:11:48.145 "dma_device_type": 1 00:11:48.145 }, 00:11:48.145 { 00:11:48.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.145 "dma_device_type": 2 00:11:48.145 } 00:11:48.145 ], 00:11:48.145 "driver_specific": {} 00:11:48.145 } 00:11:48.145 ]' 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:48.145 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.711 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:48.711 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.711 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.711 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.711 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:51.239 07:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:51.239 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:51.803 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:52.739 07:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.739 ************************************ 00:11:52.739 START TEST filesystem_ext4 00:11:52.739 ************************************ 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:52.739 07:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:52.739 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:52.739 mke2fs 1.46.5 (30-Dec-2021) 00:11:53.053 Discarding device blocks: 0/522240 done 00:11:53.053 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:53.053 Filesystem UUID: f5b5e508-7016-4f53-a03a-e1bf7213b991 00:11:53.053 Superblock backups stored on blocks: 00:11:53.053 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:53.054 00:11:53.054 Allocating group tables: 0/64 done 00:11:53.054 Writing inode tables: 0/64 done 00:11:53.054 Creating journal (8192 blocks): done 00:11:53.054 Writing superblocks and filesystem accounting information: 0/64 done 00:11:53.054 00:11:53.054 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:53.054 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.985 07:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2415272 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.985 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.243 00:11:54.243 real 0m1.341s 00:11:54.243 user 0m0.019s 00:11:54.243 sys 0m0.057s 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:54.243 ************************************ 00:11:54.243 END TEST filesystem_ext4 00:11:54.243 ************************************ 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:54.243 
07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.243 ************************************ 00:11:54.243 START TEST filesystem_btrfs 00:11:54.243 ************************************ 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.243 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:54.244 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:54.244 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:54.244 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:54.244 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:54.244 07:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:54.244 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:54.244 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:54.501 btrfs-progs v6.6.2 00:11:54.501 See https://btrfs.readthedocs.io for more information. 00:11:54.501 00:11:54.501 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:54.501 NOTE: several default settings have changed in version 5.15, please make sure 00:11:54.501 this does not affect your deployments: 00:11:54.501 - DUP for metadata (-m dup) 00:11:54.501 - enabled no-holes (-O no-holes) 00:11:54.501 - enabled free-space-tree (-R free-space-tree) 00:11:54.501 00:11:54.501 Label: (null) 00:11:54.501 UUID: 3600e3c4-f63c-4136-ad9f-0f4ca26cf3a4 00:11:54.501 Node size: 16384 00:11:54.501 Sector size: 4096 00:11:54.501 Filesystem size: 510.00MiB 00:11:54.501 Block group profiles: 00:11:54.501 Data: single 8.00MiB 00:11:54.501 Metadata: DUP 32.00MiB 00:11:54.501 System: DUP 8.00MiB 00:11:54.501 SSD detected: yes 00:11:54.501 Zoned device: no 00:11:54.501 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:54.501 Runtime features: free-space-tree 00:11:54.501 Checksum: crc32c 00:11:54.501 Number of devices: 1 00:11:54.501 Devices: 00:11:54.501 ID SIZE PATH 00:11:54.501 1 510.00MiB /dev/nvme0n1p1 00:11:54.501 00:11:54.501 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:54.501 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
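The per-filesystem smoke test repeated after each mkfs above (target/filesystem.sh@23-30) is simply: mount the partition, create a file, sync, delete it, sync, unmount. A minimal runnable sketch of that sequence, using a throwaway directory in place of the real /mnt/device mount so it needs no root and no NVMe device:

```shell
# Sketch of the filesystem.sh@23-30 smoke test, run against a plain
# temp directory instead of a mounted /dev/nvme0n1p1 (no root needed).
mnt=$(mktemp -d)     # stands in for the mount at @23
touch "$mnt/aaa"     # @24
sync                 # @25
rm "$mnt/aaa"        # @26
sync                 # @27
rmdir "$mnt"         # stands in for the umount at @30
```

In the real test the umount at @30 is retried in a loop (the `i=0` at @29 is the retry counter), since the unmount can transiently fail while the kernel still holds the filesystem busy.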
00:11:54.758 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.758 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:54.758 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.758 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:54.758 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:54.758 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2415272 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.015 00:11:55.015 real 0m0.738s 00:11:55.015 user 0m0.022s 00:11:55.015 sys 0m0.130s 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:55.015 ************************************ 00:11:55.015 END TEST filesystem_btrfs 00:11:55.015 ************************************ 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.015 ************************************ 00:11:55.015 START TEST filesystem_xfs 00:11:55.015 ************************************ 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:55.015 07:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:55.015 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:55.015 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:55.015 = sectsz=512 attr=2, projid32bit=1 00:11:55.015 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:55.015 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:55.015 data = bsize=4096 blocks=130560, imaxpct=25 00:11:55.015 = sunit=0 swidth=0 blks 00:11:55.015 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:55.015 log =internal log bsize=4096 blocks=16384, version=2 00:11:55.015 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:55.015 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:55.945 Discarding blocks...Done. 
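The make_filesystem() helper visible in these traces (autotest_common.sh@926-937) picks its force flag per filesystem type before invoking mkfs: mke2fs wants `-F`, while mkfs.btrfs and mkfs.xfs take `-f`, which is why the ext4 branch above sets `force=-F` at @932 and the btrfs/xfs branches set `force=-f` at @934. A small standalone sketch of that selection logic:

```shell
# Sketch of the force-flag selection in autotest_common.sh@926-937:
# ext4 (mke2fs) is forced with -F, btrfs/xfs with -f.
pick_force_flag() {
  local fstype=$1
  if [ "$fstype" = ext4 ]; then
    printf '%s\n' -F
  else
    printf '%s\n' -f
  fi
}
pick_force_flag ext4   # -F
pick_force_flag btrfs  # -f
pick_force_flag xfs    # -f
```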
00:11:55.945 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:55.945 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2415272 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.842 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.842 07:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.099 00:11:58.099 real 0m3.014s 00:11:58.099 user 0m0.015s 00:11:58.099 sys 0m0.065s 00:11:58.099 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.099 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.099 ************************************ 00:11:58.099 END TEST filesystem_xfs 00:11:58.099 ************************************ 00:11:58.099 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:58.356 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:58.356 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.356 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2415272 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2415272 ']' 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2415272 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2415272 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2415272' 00:11:58.357 killing process with pid 2415272 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2415272 00:11:58.357 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2415272 00:11:58.922 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.922 00:11:58.922 real 0m11.487s 00:11:58.922 user 0m43.747s 00:11:58.922 sys 0m1.868s 00:11:58.922 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.922 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.922 ************************************ 00:11:58.923 END TEST nvmf_filesystem_no_in_capsule 00:11:58.923 ************************************ 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.923 07:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.923 ************************************ 00:11:58.923 START TEST nvmf_filesystem_in_capsule 00:11:58.923 ************************************ 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2416828 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2416828 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2416828 ']' 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.923 07:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.923 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.923 [2024-07-25 07:17:31.447485] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:11:58.923 [2024-07-25 07:17:31.447595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.181 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.181 [2024-07-25 07:17:31.509726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.181 [2024-07-25 07:17:31.624954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.181 [2024-07-25 07:17:31.625019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.181 [2024-07-25 07:17:31.625036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.181 [2024-07-25 07:17:31.625049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.181 [2024-07-25 07:17:31.625060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
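The `waitforlisten 2416828` call traced above blocks until the freshly started nvmf_tgt is alive and listening on the RPC socket (`local rpc_addr=/var/tmp/spdk.sock`, `local max_retries=100`, then the "Waiting for process to start up and listen on UNIX domain socket" message). A hedged sketch of that poll loop; the helper name comes from the trace but the body is an assumption, and the retry count is made a parameter here purely for illustration:

```shell
# waitforlisten-style poll, reconstructed from the xtrace above.
# Assumption: the real autotest_common.sh helper differs in detail.
waitforlisten() {
  local pid=$1
  local rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=${3:-100}
  local i=0
  while [ "$i" -lt "$max_retries" ]; do
    # if the target process died, the socket will never appear; fail fast
    kill -0 "$pid" 2>/dev/null || return 1
    # success once the UNIX domain RPC socket exists
    [ -S "$rpc_addr" ] && return 0
    sleep 0.1
    i=$((i + 1))
  done
  return 1
}
```

The same `kill -0 $pid` liveness probe recurs throughout this log (e.g. the `kill -0 2415272` checks in the filesystem tests), so a dead target is detected on the first iteration rather than after the full retry budget.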
00:11:59.181 [2024-07-25 07:17:31.625141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.181 [2024-07-25 07:17:31.625199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.181 [2024-07-25 07:17:31.625322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.181 [2024-07-25 07:17:31.625325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.439 [2024-07-25 07:17:31.794773] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.439 Malloc1 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.439 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.697 07:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.697 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.698 [2024-07-25 07:17:31.984791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:59.698 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.698 07:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:59.698 { 00:11:59.698 "name": "Malloc1", 00:11:59.698 "aliases": [ 00:11:59.698 "758f20bb-d71b-40a2-976c-dfe29673244b" 00:11:59.698 ], 00:11:59.698 "product_name": "Malloc disk", 00:11:59.698 "block_size": 512, 00:11:59.698 "num_blocks": 1048576, 00:11:59.698 "uuid": "758f20bb-d71b-40a2-976c-dfe29673244b", 00:11:59.698 "assigned_rate_limits": { 00:11:59.698 "rw_ios_per_sec": 0, 00:11:59.698 "rw_mbytes_per_sec": 0, 00:11:59.698 "r_mbytes_per_sec": 0, 00:11:59.698 "w_mbytes_per_sec": 0 00:11:59.698 }, 00:11:59.698 "claimed": true, 00:11:59.698 "claim_type": "exclusive_write", 00:11:59.698 "zoned": false, 00:11:59.698 "supported_io_types": { 00:11:59.698 "read": true, 00:11:59.698 "write": true, 00:11:59.698 "unmap": true, 00:11:59.698 "flush": true, 00:11:59.698 "reset": true, 00:11:59.698 "nvme_admin": false, 00:11:59.698 "nvme_io": false, 00:11:59.698 "nvme_io_md": false, 00:11:59.698 "write_zeroes": true, 00:11:59.698 "zcopy": true, 00:11:59.698 "get_zone_info": false, 00:11:59.698 "zone_management": false, 00:11:59.698 "zone_append": false, 00:11:59.698 "compare": false, 00:11:59.698 "compare_and_write": false, 00:11:59.698 "abort": true, 00:11:59.698 "seek_hole": false, 00:11:59.698 "seek_data": false, 00:11:59.698 "copy": true, 00:11:59.698 "nvme_iov_md": false 00:11:59.698 }, 00:11:59.698 "memory_domains": [ 00:11:59.698 { 00:11:59.698 "dma_device_id": "system", 00:11:59.698 "dma_device_type": 1 00:11:59.698 }, 00:11:59.698 { 00:11:59.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.698 "dma_device_type": 2 00:11:59.698 } 00:11:59.698 ], 00:11:59.698 
"driver_specific": {} 00:11:59.698 } 00:11:59.698 ]' 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:59.698 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.263 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.263 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.263 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.263 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:00.263 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.159 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.159 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.159 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:02.416 07:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:02.416 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:02.674 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.046 ************************************ 00:12:04.046 START TEST filesystem_in_capsule_ext4 00:12:04.046 ************************************ 00:12:04.046 07:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:04.046 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:04.046 mke2fs 1.46.5 (30-Dec-2021) 00:12:04.046 Discarding device blocks: 
0/522240 done 00:12:04.046 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:04.046 Filesystem UUID: b3369760-591c-4b3d-acad-888914301e0c 00:12:04.046 Superblock backups stored on blocks: 00:12:04.046 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:04.046 00:12:04.046 Allocating group tables: 0/64 done 00:12:04.046 Writing inode tables: 0/64 done 00:12:04.610 Creating journal (8192 blocks): done 00:12:04.610 Writing superblocks and filesystem accounting information: 0/64 done 00:12:04.610 00:12:04.610 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:04.610 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2416828 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.868 00:12:04.868 real 0m1.154s 00:12:04.868 user 0m0.012s 00:12:04.868 sys 0m0.062s 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.868 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:04.868 ************************************ 00:12:04.868 END TEST filesystem_in_capsule_ext4 00:12:04.868 ************************************ 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.126 ************************************ 00:12:05.126 START 
TEST filesystem_in_capsule_btrfs 00:12:05.126 ************************************ 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:05.126 btrfs-progs v6.6.2 00:12:05.126 See https://btrfs.readthedocs.io for more information. 00:12:05.126 00:12:05.126 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:05.126 NOTE: several default settings have changed in version 5.15, please make sure 00:12:05.126 this does not affect your deployments: 00:12:05.126 - DUP for metadata (-m dup) 00:12:05.126 - enabled no-holes (-O no-holes) 00:12:05.126 - enabled free-space-tree (-R free-space-tree) 00:12:05.126 00:12:05.126 Label: (null) 00:12:05.126 UUID: 3b076387-ddb5-4771-a465-ab3f98067155 00:12:05.126 Node size: 16384 00:12:05.126 Sector size: 4096 00:12:05.126 Filesystem size: 510.00MiB 00:12:05.126 Block group profiles: 00:12:05.126 Data: single 8.00MiB 00:12:05.126 Metadata: DUP 32.00MiB 00:12:05.126 System: DUP 8.00MiB 00:12:05.126 SSD detected: yes 00:12:05.126 Zoned device: no 00:12:05.126 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:05.126 Runtime features: free-space-tree 00:12:05.126 Checksum: crc32c 00:12:05.126 Number of devices: 1 00:12:05.126 Devices: 00:12:05.126 ID SIZE PATH 00:12:05.126 1 510.00MiB /dev/nvme0n1p1 00:12:05.126 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:05.126 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.693 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.693 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:05.693 07:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.693 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:05.693 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:05.693 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2416828 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.951 00:12:05.951 real 0m0.828s 00:12:05.951 user 0m0.031s 00:12:05.951 sys 0m0.097s 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:05.951 ************************************ 00:12:05.951 END TEST 
filesystem_in_capsule_btrfs 00:12:05.951 ************************************ 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.951 ************************************ 00:12:05.951 START TEST filesystem_in_capsule_xfs 00:12:05.951 ************************************ 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:05.951 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.952 07:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.952 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:05.952 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:05.952 = sectsz=512 attr=2, projid32bit=1 00:12:05.952 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:05.952 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:05.952 data = bsize=4096 blocks=130560, imaxpct=25 00:12:05.952 = sunit=0 swidth=0 blks 00:12:05.952 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:05.952 log =internal log bsize=4096 blocks=16384, version=2 00:12:05.952 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:05.952 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:06.884 Discarding blocks...Done. 
00:12:06.884 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:06.884 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2416828 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.410 00:12:09.410 real 0m3.107s 00:12:09.410 user 0m0.016s 00:12:09.410 sys 0m0.062s 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.410 ************************************ 00:12:09.410 END TEST filesystem_in_capsule_xfs 00:12:09.410 ************************************ 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.410 07:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2416828 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2416828 ']' 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2416828 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.410 07:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2416828 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2416828' 00:12:09.410 killing process with pid 2416828 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2416828 00:12:09.410 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2416828 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:09.667 00:12:09.667 real 0m10.684s 00:12:09.667 user 0m40.586s 00:12:09.667 sys 0m1.827s 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.667 ************************************ 00:12:09.667 END TEST nvmf_filesystem_in_capsule 00:12:09.667 ************************************ 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:09.667 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.668 rmmod nvme_tcp 00:12:09.668 rmmod nvme_fabrics 00:12:09.668 rmmod nvme_keyring 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.668 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.276 00:12:12.276 real 
0m26.595s 00:12:12.276 user 1m25.201s 00:12:12.276 sys 0m5.261s 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.276 ************************************ 00:12:12.276 END TEST nvmf_filesystem 00:12:12.276 ************************************ 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.276 ************************************ 00:12:12.276 START TEST nvmf_target_discovery 00:12:12.276 ************************************ 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.276 * Looking for test storage... 
00:12:12.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.276 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.277 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.177 
07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:12:14.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:14.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.177 07:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.177 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:14.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.178 07:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:14.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:12:14.178 00:12:14.178 --- 10.0.0.2 ping statistics --- 00:12:14.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.178 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:12:14.178 00:12:14.178 --- 10.0.0.1 ping statistics --- 00:12:14.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.178 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:14.178 07:17:46 
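The nvmf_tcp_init sequence traced above moves the target interface into its own network namespace, addresses both ends, opens TCP port 4420, and ping-verifies the path. A dry-run sketch of that command sequence (it only echoes the commands; the real steps require root and the cvl_0_* interfaces):

```shell
#!/usr/bin/env bash
# Emit the ip/iptables commands nvmf_tcp_init runs, in order, without executing
# them. Argument names are illustrative; values match this run's trace.
setup_cmds() {
    local target_if=$1 initiator_if=$2 ns=$3
    echo ip netns add "$ns"
    echo ip link set "$target_if" netns "$ns"
    echo ip addr add 10.0.0.1/24 dev "$initiator_if"
    echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    echo ip link set "$initiator_if" up
    echo ip netns exec "$ns" ip link set "$target_if" up
    echo ip netns exec "$ns" ip link set lo up
    echo iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Pipe the output through `sudo sh` (or drop the `echo`s) to apply it for real.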
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2420167 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2420167 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2420167 ']' 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.178 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.178 [2024-07-25 07:17:46.493931] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:12:14.178 [2024-07-25 07:17:46.494002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.178 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.178 [2024-07-25 07:17:46.558032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.178 [2024-07-25 07:17:46.670526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.178 [2024-07-25 07:17:46.670586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.178 [2024-07-25 07:17:46.670615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.178 [2024-07-25 07:17:46.670626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.178 [2024-07-25 07:17:46.670635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:14.178 [2024-07-25 07:17:46.670718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.178 [2024-07-25 07:17:46.670784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.178 [2024-07-25 07:17:46.670852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.178 [2024-07-25 07:17:46.670850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 [2024-07-25 07:17:46.828862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:14.437 07:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 Null1 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 [2024-07-25 07:17:46.869158] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 Null2 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 
07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 Null3 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:14.437 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.438 Null4 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.438 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:14.696 07:17:46 
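The repeated rpc_cmd calls above are target/discovery.sh's setup loop: four null bdevs, each placed in its own subsystem with a TCP listener on 10.0.0.2:4420. A dry-run sketch of that loop (echoing the rpc.py invocations rather than issuing them against a live target):

```shell
#!/usr/bin/env bash
# Reconstruct the seq 1 4 loop from the trace as explicit rpc.py command lines.
# Replace the echo with scripts/rpc.py against a running nvmf_tgt to apply it.
build_subsystems() {
    local i
    for i in 1 2 3 4; do
        echo rpc.py bdev_null_create "Null$i" 102400 512
        echo rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        echo rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        echo rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
}

build_subsystems
```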
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.696 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:14.696 00:12:14.696 Discovery Log Number of Records 6, Generation counter 6 00:12:14.696 =====Discovery Log Entry 0====== 00:12:14.696 trtype: tcp 00:12:14.696 adrfam: ipv4 00:12:14.696 subtype: current discovery subsystem 00:12:14.696 treq: not required 00:12:14.696 portid: 0 00:12:14.696 trsvcid: 4420 00:12:14.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:14.696 traddr: 10.0.0.2 00:12:14.696 eflags: explicit discovery connections, duplicate discovery information 00:12:14.696 sectype: none 00:12:14.696 =====Discovery Log Entry 1====== 00:12:14.696 trtype: tcp 00:12:14.696 adrfam: ipv4 00:12:14.696 subtype: nvme subsystem 00:12:14.696 treq: not required 00:12:14.696 portid: 0 00:12:14.696 trsvcid: 4420 00:12:14.696 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:14.696 traddr: 10.0.0.2 00:12:14.696 eflags: none 00:12:14.696 sectype: none 00:12:14.696 =====Discovery Log Entry 2====== 00:12:14.696 trtype: tcp 00:12:14.696 adrfam: ipv4 00:12:14.696 subtype: nvme subsystem 00:12:14.696 treq: not required 00:12:14.696 portid: 0 00:12:14.696 trsvcid: 4420 00:12:14.696 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:14.696 traddr: 10.0.0.2 00:12:14.696 eflags: none 00:12:14.696 sectype: none 00:12:14.696 =====Discovery Log Entry 3====== 00:12:14.696 trtype: tcp 00:12:14.696 adrfam: ipv4 00:12:14.696 subtype: nvme subsystem 00:12:14.696 treq: not required 00:12:14.696 portid: 
0 00:12:14.696 trsvcid: 4420 00:12:14.696 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:14.696 traddr: 10.0.0.2 00:12:14.696 eflags: none 00:12:14.696 sectype: none 00:12:14.696 =====Discovery Log Entry 4====== 00:12:14.696 trtype: tcp 00:12:14.696 adrfam: ipv4 00:12:14.696 subtype: nvme subsystem 00:12:14.696 treq: not required 00:12:14.696 portid: 0 00:12:14.696 trsvcid: 4420 00:12:14.696 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:14.696 traddr: 10.0.0.2 00:12:14.696 eflags: none 00:12:14.696 sectype: none 00:12:14.696 =====Discovery Log Entry 5====== 00:12:14.696 trtype: tcp 00:12:14.696 adrfam: ipv4 00:12:14.696 subtype: discovery subsystem referral 00:12:14.696 treq: not required 00:12:14.696 portid: 0 00:12:14.696 trsvcid: 4430 00:12:14.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:14.696 traddr: 10.0.0.2 00:12:14.696 eflags: none 00:12:14.696 sectype: none 00:12:14.696 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:14.696 Perform nvmf subsystem discovery via RPC 00:12:14.696 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:14.696 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.696 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.696 [ 00:12:14.696 { 00:12:14.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:14.696 "subtype": "Discovery", 00:12:14.696 "listen_addresses": [ 00:12:14.696 { 00:12:14.696 "trtype": "TCP", 00:12:14.696 "adrfam": "IPv4", 00:12:14.696 "traddr": "10.0.0.2", 00:12:14.696 "trsvcid": "4420" 00:12:14.696 } 00:12:14.696 ], 00:12:14.696 "allow_any_host": true, 00:12:14.696 "hosts": [] 00:12:14.696 }, 00:12:14.696 { 00:12:14.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.696 "subtype": "NVMe", 00:12:14.696 "listen_addresses": [ 
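The six discovery log entries above (the discovery subsystem itself, cnode1-4, and the 4430 referral) can be pulled out of the `nvme discover` text with a one-line awk filter. A sketch, assuming the `subnqn: <value>` field layout this nvme-cli version prints; other versions may format differently:

```shell
#!/usr/bin/env bash
# Extract the subsystem NQN of each discovery log entry from nvme discover output.
parse_subnqns() {
    awk '$1 == "subnqn:" { print $2 }'
}

# Self-contained sample shaped like the log above:
parse_subnqns <<'EOF'
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.0.0.2
subnqn:  nqn.2016-06.io.spdk:cnode1
EOF
```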
00:12:14.696 { 00:12:14.696 "trtype": "TCP", 00:12:14.696 "adrfam": "IPv4", 00:12:14.696 "traddr": "10.0.0.2", 00:12:14.696 "trsvcid": "4420" 00:12:14.696 } 00:12:14.696 ], 00:12:14.696 "allow_any_host": true, 00:12:14.696 "hosts": [], 00:12:14.696 "serial_number": "SPDK00000000000001", 00:12:14.696 "model_number": "SPDK bdev Controller", 00:12:14.696 "max_namespaces": 32, 00:12:14.696 "min_cntlid": 1, 00:12:14.696 "max_cntlid": 65519, 00:12:14.696 "namespaces": [ 00:12:14.696 { 00:12:14.696 "nsid": 1, 00:12:14.696 "bdev_name": "Null1", 00:12:14.696 "name": "Null1", 00:12:14.696 "nguid": "0E42B313DCB54EC2833DD141D6B997DC", 00:12:14.696 "uuid": "0e42b313-dcb5-4ec2-833d-d141d6b997dc" 00:12:14.696 } 00:12:14.696 ] 00:12:14.696 }, 00:12:14.696 { 00:12:14.696 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:14.696 "subtype": "NVMe", 00:12:14.696 "listen_addresses": [ 00:12:14.696 { 00:12:14.696 "trtype": "TCP", 00:12:14.696 "adrfam": "IPv4", 00:12:14.697 "traddr": "10.0.0.2", 00:12:14.697 "trsvcid": "4420" 00:12:14.697 } 00:12:14.697 ], 00:12:14.697 "allow_any_host": true, 00:12:14.697 "hosts": [], 00:12:14.697 "serial_number": "SPDK00000000000002", 00:12:14.697 "model_number": "SPDK bdev Controller", 00:12:14.697 "max_namespaces": 32, 00:12:14.697 "min_cntlid": 1, 00:12:14.697 "max_cntlid": 65519, 00:12:14.697 "namespaces": [ 00:12:14.697 { 00:12:14.697 "nsid": 1, 00:12:14.697 "bdev_name": "Null2", 00:12:14.697 "name": "Null2", 00:12:14.697 "nguid": "0063CFBC751049E5B80CE7A40B631730", 00:12:14.697 "uuid": "0063cfbc-7510-49e5-b80c-e7a40b631730" 00:12:14.697 } 00:12:14.697 ] 00:12:14.697 }, 00:12:14.697 { 00:12:14.697 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:14.697 "subtype": "NVMe", 00:12:14.697 "listen_addresses": [ 00:12:14.697 { 00:12:14.697 "trtype": "TCP", 00:12:14.697 "adrfam": "IPv4", 00:12:14.697 "traddr": "10.0.0.2", 00:12:14.697 "trsvcid": "4420" 00:12:14.697 } 00:12:14.697 ], 00:12:14.697 "allow_any_host": true, 00:12:14.697 "hosts": [], 00:12:14.697 
"serial_number": "SPDK00000000000003", 00:12:14.697 "model_number": "SPDK bdev Controller", 00:12:14.697 "max_namespaces": 32, 00:12:14.697 "min_cntlid": 1, 00:12:14.697 "max_cntlid": 65519, 00:12:14.697 "namespaces": [ 00:12:14.697 { 00:12:14.697 "nsid": 1, 00:12:14.697 "bdev_name": "Null3", 00:12:14.697 "name": "Null3", 00:12:14.697 "nguid": "973BD74D979049BC806419B96D25B84C", 00:12:14.697 "uuid": "973bd74d-9790-49bc-8064-19b96d25b84c" 00:12:14.697 } 00:12:14.697 ] 00:12:14.697 }, 00:12:14.697 { 00:12:14.697 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:14.697 "subtype": "NVMe", 00:12:14.697 "listen_addresses": [ 00:12:14.697 { 00:12:14.697 "trtype": "TCP", 00:12:14.697 "adrfam": "IPv4", 00:12:14.697 "traddr": "10.0.0.2", 00:12:14.697 "trsvcid": "4420" 00:12:14.697 } 00:12:14.697 ], 00:12:14.697 "allow_any_host": true, 00:12:14.697 "hosts": [], 00:12:14.697 "serial_number": "SPDK00000000000004", 00:12:14.697 "model_number": "SPDK bdev Controller", 00:12:14.697 "max_namespaces": 32, 00:12:14.697 "min_cntlid": 1, 00:12:14.697 "max_cntlid": 65519, 00:12:14.697 "namespaces": [ 00:12:14.697 { 00:12:14.697 "nsid": 1, 00:12:14.697 "bdev_name": "Null4", 00:12:14.697 "name": "Null4", 00:12:14.697 "nguid": "2DE48CEB1A774AA39DFFC3A3B04EEFD4", 00:12:14.697 "uuid": "2de48ceb-1a77-4aa3-9dff-c3a3b04eefd4" 00:12:14.697 } 00:12:14.697 ] 00:12:14.697 } 00:12:14.697 ] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
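The nvmf_get_subsystems JSON above lists the discovery subsystem plus the four created ones. A grep/sed sketch for pulling out the `"nqn"` values when jq is not available (with jq, `jq -r '.[].nqn'` does the same more robustly); this assumes the one-key-per-line formatting shown in the log:

```shell
#!/usr/bin/env bash
# Extract every "nqn" value from nvmf_get_subsystems JSON output on stdin.
get_nqns() {
    grep -o '"nqn": *"[^"]*"' | sed 's/.*"nqn": *"\([^"]*\)"/\1/'
}

get_nqns <<'EOF'
[ { "nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery" },
  { "nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe" } ]
EOF
```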
xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.697 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:14.956 
07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.956 rmmod nvme_tcp 00:12:14.956 rmmod nvme_fabrics 00:12:14.956 rmmod nvme_keyring 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2420167 ']' 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2420167 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2420167 ']' 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2420167 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2420167 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2420167' 00:12:14.956 killing process with pid 2420167 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2420167 00:12:14.956 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2420167 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.214 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.115 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:17.115 00:12:17.115 real 0m5.387s 00:12:17.115 user 0m4.312s 00:12:17.115 sys 0m1.811s 00:12:17.115 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.115 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.115 ************************************ 00:12:17.115 END TEST 
nvmf_target_discovery 00:12:17.115 ************************************ 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.373 ************************************ 00:12:17.373 START TEST nvmf_referrals 00:12:17.373 ************************************ 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:17.373 * Looking for test storage... 00:12:17.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.373 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.374 07:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.374 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:19.277 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:19.277 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:19.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:19.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:12:19.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.537 07:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:12:19.537 00:12:19.537 --- 10.0.0.2 ping statistics --- 00:12:19.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.537 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:12:19.537 00:12:19.537 --- 10.0.0.1 ping statistics --- 00:12:19.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.537 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2422251 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2422251 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2422251 ']' 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.537 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:19.538 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.538 [2024-07-25 07:17:52.041662] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:12:19.538 [2024-07-25 07:17:52.041759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.796 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.796 [2024-07-25 07:17:52.121761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.796 [2024-07-25 07:17:52.246752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.796 [2024-07-25 07:17:52.246811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:19.796 [2024-07-25 07:17:52.246828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.796 [2024-07-25 07:17:52.246843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.796 [2024-07-25 07:17:52.246856] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.796 [2024-07-25 07:17:52.246942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.796 [2024-07-25 07:17:52.246970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.796 [2024-07-25 07:17:52.247038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.796 [2024-07-25 07:17:52.247042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 [2024-07-25 07:17:52.411779] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 [2024-07-25 07:17:52.424036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:20.055 07:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.055 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.313 07:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.313 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.571 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.572 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.830 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.086 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:21.344 07:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.344 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.602 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.602 rmmod nvme_tcp 00:12:21.602 rmmod nvme_fabrics 00:12:21.602 rmmod nvme_keyring 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2422251 ']' 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2422251 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2422251 ']' 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2422251 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2422251 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2422251' 00:12:21.602 killing process with pid 2422251 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 2422251 00:12:21.602 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2422251 00:12:21.861 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.862 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.397 00:12:24.397 real 0m6.669s 00:12:24.397 user 0m9.518s 00:12:24.397 sys 0m2.159s 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.397 ************************************ 00:12:24.397 END TEST nvmf_referrals 00:12:24.397 ************************************ 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- 
# '[' 3 -le 1 ']' 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.397 ************************************ 00:12:24.397 START TEST nvmf_connect_disconnect 00:12:24.397 ************************************ 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:24.397 * Looking for test storage... 00:12:24.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.397 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.397 07:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.398 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.298 07:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.298 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.299 07:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.299 07:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.299 
07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:12:26.299 00:12:26.299 --- 10.0.0.2 ping statistics --- 00:12:26.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.299 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:26.299 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:26.299 00:12:26.299 --- 10.0.0.1 ping statistics --- 00:12:26.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.300 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2424542 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2424542 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2424542 ']' 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.300 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.300 [2024-07-25 07:17:58.655362] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:12:26.300 [2024-07-25 07:17:58.655460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.300 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.300 [2024-07-25 07:17:58.734669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.558 [2024-07-25 07:17:58.862234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.558 [2024-07-25 07:17:58.862307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.558 [2024-07-25 07:17:58.862326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.558 [2024-07-25 07:17:58.862340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.558 [2024-07-25 07:17:58.862352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.558 [2024-07-25 07:17:58.862410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.558 [2024-07-25 07:17:58.862440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.558 [2024-07-25 07:17:58.862493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.558 [2024-07-25 07:17:58.862498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.558 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.558 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:26.558 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.558 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:26.558 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.558 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.558 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:26.558 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.558 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.558 [2024-07-25 07:17:59.017784] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.558 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.558 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.559 07:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.559 [2024-07-25 07:17:59.068994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:26.559 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:29.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.744 rmmod nvme_tcp 00:12:40.744 rmmod nvme_fabrics 00:12:40.744 rmmod nvme_keyring 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2424542 ']' 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2424542 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2424542 ']' 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2424542 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2424542 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2424542' 00:12:40.744 killing process with pid 2424542 00:12:40.744 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2424542 00:12:40.744 07:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2424542 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.744 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.278 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.278 00:12:43.278 real 0m18.792s 00:12:43.278 user 0m56.413s 00:12:43.278 sys 0m3.363s 00:12:43.278 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.278 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.278 ************************************ 00:12:43.278 END TEST nvmf_connect_disconnect 00:12:43.278 ************************************ 00:12:43.278 07:18:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.279 ************************************ 00:12:43.279 START TEST nvmf_multitarget 00:12:43.279 ************************************ 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:43.279 * Looking for test storage... 00:12:43.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.279 07:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:43.279 
07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.279 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.183 07:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.183 07:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.183 07:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:45.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.183 07:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.183 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:45.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:12:45.184 00:12:45.184 --- 10.0.0.2 ping statistics --- 00:12:45.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.184 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:12:45.184 00:12:45.184 --- 10.0.0.1 ping statistics --- 00:12:45.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.184 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2428794 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2428794 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2428794 ']' 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.184 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.184 [2024-07-25 07:18:17.486502] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:12:45.184 [2024-07-25 07:18:17.486586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.184 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.184 [2024-07-25 07:18:17.555186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.184 [2024-07-25 07:18:17.679526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.184 [2024-07-25 07:18:17.679575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:45.184 [2024-07-25 07:18:17.679590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.184 [2024-07-25 07:18:17.679603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.184 [2024-07-25 07:18:17.679615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.184 [2024-07-25 07:18:17.679716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.184 [2024-07-25 07:18:17.679768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.184 [2024-07-25 07:18:17.679838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.184 [2024-07-25 07:18:17.679841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.116 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.116 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:46.116 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.117 07:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:46.117 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:46.374 "nvmf_tgt_1" 00:12:46.374 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:46.374 "nvmf_tgt_2" 00:12:46.374 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.374 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:46.631 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:46.631 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:46.631 true 00:12:46.631 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:46.889 true 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.889 rmmod nvme_tcp 00:12:46.889 rmmod nvme_fabrics 00:12:46.889 rmmod nvme_keyring 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2428794 ']' 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2428794 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2428794 ']' 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2428794 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2428794 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2428794' 00:12:46.889 killing process with pid 2428794 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2428794 00:12:46.889 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2428794 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.454 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.390 00:12:49.390 real 
0m6.507s 00:12:49.390 user 0m9.636s 00:12:49.390 sys 0m1.948s 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.390 ************************************ 00:12:49.390 END TEST nvmf_multitarget 00:12:49.390 ************************************ 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.390 ************************************ 00:12:49.390 START TEST nvmf_rpc 00:12:49.390 ************************************ 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:49.390 * Looking for test storage... 
00:12:49.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.390 
07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.390 07:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.390 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.919 07:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.919 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.919 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:51.919 00:12:51.919 --- 10.0.0.2 ping statistics --- 00:12:51.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.919 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:51.919 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:51.919 00:12:51.919 --- 10.0.0.1 ping statistics --- 00:12:51.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.919 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:51.919 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.919 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:51.919 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2431026 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2431026 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2431026 ']' 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.920 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.920 [2024-07-25 07:18:24.094588] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:12:51.920 [2024-07-25 07:18:24.094673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.920 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.920 [2024-07-25 07:18:24.170070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.920 [2024-07-25 07:18:24.291202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.920 [2024-07-25 07:18:24.291270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.920 [2024-07-25 07:18:24.291288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.920 [2024-07-25 07:18:24.291310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.920 [2024-07-25 07:18:24.291323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:51.920 [2024-07-25 07:18:24.291397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.920 [2024-07-25 07:18:24.291453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.920 [2024-07-25 07:18:24.291505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.920 [2024-07-25 07:18:24.291509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:52.858 "tick_rate": 2700000000, 00:12:52.858 "poll_groups": [ 00:12:52.858 { 00:12:52.858 "name": "nvmf_tgt_poll_group_000", 00:12:52.858 "admin_qpairs": 0, 00:12:52.858 "io_qpairs": 0, 00:12:52.858 "current_admin_qpairs": 0, 00:12:52.858 "current_io_qpairs": 0, 00:12:52.858 "pending_bdev_io": 0, 00:12:52.858 "completed_nvme_io": 0, 
00:12:52.858 "transports": [] 00:12:52.858 }, 00:12:52.858 { 00:12:52.858 "name": "nvmf_tgt_poll_group_001", 00:12:52.858 "admin_qpairs": 0, 00:12:52.858 "io_qpairs": 0, 00:12:52.858 "current_admin_qpairs": 0, 00:12:52.858 "current_io_qpairs": 0, 00:12:52.858 "pending_bdev_io": 0, 00:12:52.858 "completed_nvme_io": 0, 00:12:52.858 "transports": [] 00:12:52.858 }, 00:12:52.858 { 00:12:52.858 "name": "nvmf_tgt_poll_group_002", 00:12:52.858 "admin_qpairs": 0, 00:12:52.858 "io_qpairs": 0, 00:12:52.858 "current_admin_qpairs": 0, 00:12:52.858 "current_io_qpairs": 0, 00:12:52.858 "pending_bdev_io": 0, 00:12:52.858 "completed_nvme_io": 0, 00:12:52.858 "transports": [] 00:12:52.858 }, 00:12:52.858 { 00:12:52.858 "name": "nvmf_tgt_poll_group_003", 00:12:52.858 "admin_qpairs": 0, 00:12:52.858 "io_qpairs": 0, 00:12:52.858 "current_admin_qpairs": 0, 00:12:52.858 "current_io_qpairs": 0, 00:12:52.858 "pending_bdev_io": 0, 00:12:52.858 "completed_nvme_io": 0, 00:12:52.858 "transports": [] 00:12:52.858 } 00:12:52.858 ] 00:12:52.858 }' 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.858 [2024-07-25 07:18:25.192313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.858 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:52.858 "tick_rate": 2700000000, 00:12:52.858 "poll_groups": [ 00:12:52.858 { 00:12:52.858 "name": "nvmf_tgt_poll_group_000", 00:12:52.859 "admin_qpairs": 0, 00:12:52.859 "io_qpairs": 0, 00:12:52.859 "current_admin_qpairs": 0, 00:12:52.859 "current_io_qpairs": 0, 00:12:52.859 "pending_bdev_io": 0, 00:12:52.859 "completed_nvme_io": 0, 00:12:52.859 "transports": [ 00:12:52.859 { 00:12:52.859 "trtype": "TCP" 00:12:52.859 } 00:12:52.859 ] 00:12:52.859 }, 00:12:52.859 { 00:12:52.859 "name": "nvmf_tgt_poll_group_001", 00:12:52.859 "admin_qpairs": 0, 00:12:52.859 "io_qpairs": 0, 00:12:52.859 "current_admin_qpairs": 0, 00:12:52.859 "current_io_qpairs": 0, 00:12:52.859 "pending_bdev_io": 0, 00:12:52.859 "completed_nvme_io": 0, 00:12:52.859 "transports": [ 00:12:52.859 { 00:12:52.859 "trtype": "TCP" 00:12:52.859 } 00:12:52.859 ] 00:12:52.859 }, 00:12:52.859 { 00:12:52.859 "name": "nvmf_tgt_poll_group_002", 00:12:52.859 "admin_qpairs": 0, 00:12:52.859 "io_qpairs": 0, 00:12:52.859 "current_admin_qpairs": 0, 00:12:52.859 "current_io_qpairs": 0, 00:12:52.859 "pending_bdev_io": 0, 00:12:52.859 "completed_nvme_io": 0, 00:12:52.859 
"transports": [ 00:12:52.859 { 00:12:52.859 "trtype": "TCP" 00:12:52.859 } 00:12:52.859 ] 00:12:52.859 }, 00:12:52.859 { 00:12:52.859 "name": "nvmf_tgt_poll_group_003", 00:12:52.859 "admin_qpairs": 0, 00:12:52.859 "io_qpairs": 0, 00:12:52.859 "current_admin_qpairs": 0, 00:12:52.859 "current_io_qpairs": 0, 00:12:52.859 "pending_bdev_io": 0, 00:12:52.859 "completed_nvme_io": 0, 00:12:52.859 "transports": [ 00:12:52.859 { 00:12:52.859 "trtype": "TCP" 00:12:52.859 } 00:12:52.859 ] 00:12:52.859 } 00:12:52.859 ] 00:12:52.859 }' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:52.859 07:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.859 Malloc1 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.859 [2024-07-25 07:18:25.332079] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:52.859 [2024-07-25 07:18:25.354538] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:52.859 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.859 could not add new controller: failed to write to nvme-fabrics device 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.859 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.117 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
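The rejection above ("Subsystem ... does not allow host ...") is cleared by the `rpc.sh@61` call to `nvmf_subsystem_add_host`. SPDK's `rpc_cmd`/`rpc.py` helpers send JSON-RPC 2.0 requests over the target's RPC socket; a minimal sketch of the request this step builds (method name and NQNs are taken from the log; the wrapper function itself is illustrative, not SPDK code):

```python
import json

def rpc_request(method, params, req_id=1):
    # Shape of the JSON-RPC 2.0 requests that SPDK's rpc.py client sends
    # to the target's RPC listen socket.
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": req_id})

# Mirrors rpc.sh@61 above: whitelist this host NQN on cnode1 so the
# subsequent `nvme connect` (rpc.sh@62) succeeds instead of failing
# with "does not allow host".
add_host = rpc_request("nvmf_subsystem_add_host", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "host": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
})
print(add_host)
```

The later `nvmf_subsystem_remove_host` and `nvmf_subsystem_allow_any_host -e` steps in the log are the same request shape with the corresponding method names.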
00:12:53.117 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.683 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.683 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.683 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.683 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.683 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.580 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:55.580 07:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:55.580 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.581 [2024-07-25 07:18:28.083134] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:55.581 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.581 could not add new controller: failed to write to nvme-fabrics device 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.581 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.857 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.857 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.423 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.423 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.423 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.423 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.423 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.323 07:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.323 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.323 07:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 [2024-07-25 07:18:30.862883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.582 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.148 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.148 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.148 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.148 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.148 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.049 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 [2024-07-25 07:18:33.612367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.305 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.870 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.870 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
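Each pass of the `rpc.sh@81` loop above issues the same RPC sequence: create the subsystem, add a TCP listener, attach the `Malloc1` bdev as namespace 5, open it to any host, then (after the host connects and disconnects) tear it back down. A sketch of that per-iteration sequence follows; the nested `listen_address`/`namespace` parameter layouts are assumptions based on the CLI flags visible in the log (`-t tcp -a 10.0.0.2 -s 4420`, `Malloc1 -n 5`), not verbatim from it:

```python
SUBSYS = "nqn.2016-06.io.spdk:cnode1"

def iteration_rpcs():
    # One pass of the rpc.sh@81 loop (rpc.sh@82 through rpc.sh@94 in the
    # log), expressed as (method, params) pairs.
    return [
        ("nvmf_create_subsystem",
         {"nqn": SUBSYS, "serial_number": "SPDKISFASTANDAWESOME"}),
        ("nvmf_subsystem_add_listener",
         {"nqn": SUBSYS,
          "listen_address": {"trtype": "tcp", "traddr": "10.0.0.2",
                             "trsvcid": "4420"}}),
        ("nvmf_subsystem_add_ns",
         {"nqn": SUBSYS,
          "namespace": {"bdev_name": "Malloc1", "nsid": 5}}),
        ("nvmf_subsystem_allow_any_host", {"nqn": SUBSYS}),
        # ... host runs `nvme connect`, waitforserial, `nvme disconnect` ...
        ("nvmf_subsystem_remove_ns", {"nqn": SUBSYS, "nsid": 5}),
        ("nvmf_delete_subsystem", {"nqn": SUBSYS}),
    ]

for method, params in iteration_rpcs():
    print(method, params["nqn"])
```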
00:13:01.870 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.870 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:01.870 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:03.767 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.025 [2024-07-25 07:18:36.377713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.590 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.590 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.590 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.590 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.590 
07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
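The `waitforserial`/`waitforserial_disconnect` helpers traced above poll `lsblk -l -o NAME,SERIAL` and grep for the subsystem serial until the device appears (or disappears), capped at 16 attempts with a 2 s sleep, as the `(( i++ <= 15 ))` / `sleep 2` lines show. The retry logic reduces to the sketch below; `probe` stands in for the `lsblk | grep -c` pipeline and is a placeholder, not an SPDK helper:

```python
import time

def wait_for_serial(probe, expect_present=True, retries=15, delay=2.0):
    # Poll probe() -- e.g. a wrapper around
    # `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` --
    # until the device's presence matches expectation, mirroring
    # waitforserial (expect_present=True) and waitforserial_disconnect
    # (expect_present=False) from the log.
    for _ in range(retries + 1):
        found = probe() > 0
        if found == expect_present:
            return True
        time.sleep(delay)
    return False
```

Returning immediately on the first matching poll is why the traces above show a single `lsblk`/`grep` pair per wait rather than the full 16 iterations.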
00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.141 [2024-07-25 07:18:39.180452] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.141 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.399 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.399 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.399 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.399 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:07.399 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:09.297 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 [2024-07-25 07:18:41.935700] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.555 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.121 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.121 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.121 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.121 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.121 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.019 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.278 07:18:44 
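The `waitforserial` helper traced above (common/autotest_common.sh @1198-@1208) can be reconstructed from the xtrace output roughly as follows. This is a hedged sketch, not the exact upstream source: `list_block_devices` is a stand-in wrapper for the logged `lsblk -l -o NAME,SERIAL` invocation so the polling loop can be exercised without real NVMe hardware.

```shell
# Stand-in for "lsblk -l -o NAME,SERIAL" as seen in the trace; swap in a
# stub when no block devices are available.
list_block_devices() { lsblk -l -o NAME,SERIAL; }

# Poll until a block device with the expected serial appears after
# "nvme connect" (sketch of waitforserial from the log above).
waitforserial() {
    local serial=$1
    local i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        # count devices whose SERIAL column matches
        nvme_devices=$(list_block_devices | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

The log shows the loop succeeding on its first check after the initial two-second sleep, with `nvme_devices=1` matching `nvme_device_counter`.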
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 [2024-07-25 07:18:44.652846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 
07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 
07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 [2024-07-25 07:18:44.700893] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.278 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 [2024-07-25 07:18:44.749052] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.279 [2024-07-25 07:18:44.797202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.279 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 [2024-07-25 07:18:44.845388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.537 07:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 
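The repeated create/teardown cycle traced above (target/rpc.sh @99-@107) reduces to the following loop, reconstructed from the xtrace lines. In the real script `rpc_cmd` forwards to SPDK's rpc.py against the running target; it is stubbed here so the control flow is runnable on its own.

```shell
# Stub: the real rpc_cmd talks to the SPDK target over its RPC socket.
rpc_cmd() { echo "rpc: $*"; }

loops=5
for i in $(seq 1 "$loops"); do
    # per-iteration lifecycle, as logged at rpc.sh @100-@107
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
```

Each iteration also triggers the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice seen in the log when the listener is re-added.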
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.537 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:12.537 "tick_rate": 2700000000, 00:13:12.537 "poll_groups": [ 00:13:12.537 { 00:13:12.537 "name": "nvmf_tgt_poll_group_000", 00:13:12.537 "admin_qpairs": 2, 00:13:12.537 "io_qpairs": 84, 00:13:12.537 "current_admin_qpairs": 0, 00:13:12.537 "current_io_qpairs": 0, 00:13:12.537 "pending_bdev_io": 0, 00:13:12.537 "completed_nvme_io": 135, 00:13:12.537 "transports": [ 00:13:12.537 { 00:13:12.537 "trtype": "TCP" 00:13:12.537 } 00:13:12.537 ] 00:13:12.537 }, 00:13:12.537 { 00:13:12.537 "name": "nvmf_tgt_poll_group_001", 00:13:12.537 "admin_qpairs": 2, 00:13:12.537 "io_qpairs": 84, 00:13:12.537 "current_admin_qpairs": 0, 00:13:12.537 "current_io_qpairs": 0, 00:13:12.537 "pending_bdev_io": 0, 00:13:12.537 "completed_nvme_io": 183, 00:13:12.537 "transports": [ 00:13:12.537 { 00:13:12.537 "trtype": "TCP" 00:13:12.537 } 00:13:12.537 ] 00:13:12.537 }, 00:13:12.537 { 00:13:12.537 "name": "nvmf_tgt_poll_group_002", 00:13:12.537 "admin_qpairs": 1, 00:13:12.537 "io_qpairs": 84, 00:13:12.537 "current_admin_qpairs": 0, 00:13:12.537 "current_io_qpairs": 0, 00:13:12.537 "pending_bdev_io": 0, 00:13:12.537 "completed_nvme_io": 186, 00:13:12.537 "transports": [ 00:13:12.537 { 00:13:12.537 "trtype": "TCP" 00:13:12.537 } 00:13:12.537 ] 00:13:12.537 }, 00:13:12.537 { 00:13:12.537 "name": "nvmf_tgt_poll_group_003", 00:13:12.537 "admin_qpairs": 2, 00:13:12.537 "io_qpairs": 84, 00:13:12.537 "current_admin_qpairs": 0, 00:13:12.537 "current_io_qpairs": 0, 00:13:12.537 "pending_bdev_io": 0, 
00:13:12.537 "completed_nvme_io": 182, 00:13:12.537 "transports": [ 00:13:12.537 { 00:13:12.537 "trtype": "TCP" 00:13:12.537 } 00:13:12.537 ] 00:13:12.537 } 00:13:12.538 ] 00:13:12.538 }' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
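The `jsum` helper invoked above (target/rpc.sh @19-@20) sums one numeric field across all poll groups in the `nvmf_get_stats` JSON: the logged pipeline is `jq '<filter>' | awk '{s+=$1}END{print s}'`. The sketch below keeps the awk summation verbatim but replaces the jq step with grep/tr so it runs without jq; the sample values (`admin_qpairs` 2, 2, 1, 2 and `io_qpairs` 84 per group) are taken from the stats blob in the log.

```shell
# Per-poll-group counters copied from the nvmf_get_stats output above.
stats='
"admin_qpairs": 2,
"io_qpairs": 84,
"admin_qpairs": 2,
"io_qpairs": 84,
"admin_qpairs": 1,
"io_qpairs": 84,
"admin_qpairs": 2,
"io_qpairs": 84,
'

# Sum every occurrence of the named field. The real jsum uses
# jq ".poll_groups[].$1"; grep + tr is a jq-free stand-in.
jsum() {
    printf '%s\n' "$stats" \
        | grep "\"$1\"" \
        | tr -dc '0-9\n' \
        | awk '{s+=$1} END {print s}'
}
```

With these inputs `jsum admin_qpairs` yields 7 and `jsum io_qpairs` yields 336, matching the `(( 7 > 0 ))` and `(( 336 > 0 ))` checks in the trace.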
# set +e 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.538 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.538 rmmod nvme_tcp 00:13:12.538 rmmod nvme_fabrics 00:13:12.538 rmmod nvme_keyring 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2431026 ']' 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2431026 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2431026 ']' 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2431026 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.538 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2431026 00:13:12.796 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:12.796 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:12.796 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2431026' 00:13:12.796 killing process with pid 2431026 00:13:12.796 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2431026 00:13:12.796 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
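The `killprocess` sequence traced above (common/autotest_common.sh @950-@974) checks that the pid is alive with `kill -0`, reports the process name via the logged `ps --no-headers -o comm=` invocation, then kills and reaps it. A hedged reconstruction, not the exact upstream helper:

```shell
# Sketch of killprocess as reconstructed from the trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # is the process still alive?
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= "$pid" 2>/dev/null)
    echo "killing process with pid $pid ($name)"
    kill "$pid" 2>/dev/null
    # reap it if it is our child; ignore failure otherwise
    wait "$pid" 2>/dev/null || true
}
```

The log's version additionally special-cases processes run under sudo; that branch (`'[' reactor_0 = sudo ']'`) is omitted here.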
common/autotest_common.sh@974 -- # wait 2431026 00:13:13.055 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.055 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.055 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.056 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.056 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.056 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.056 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.056 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.956 00:13:14.956 real 0m25.635s 00:13:14.956 user 1m23.514s 00:13:14.956 sys 0m4.227s 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.956 ************************************ 00:13:14.956 END TEST nvmf_rpc 00:13:14.956 ************************************ 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:13:14.956 ************************************ 00:13:14.956 START TEST nvmf_invalid 00:13:14.956 ************************************ 00:13:14.956 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.215 * Looking for test storage... 00:13:15.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.216 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.120 07:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.120 
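The array setup above sorts PCI vendor:device IDs into NIC families (e810, x722, mlx) before device discovery. A minimal standalone sketch of that classification — the function name is illustrative, and the IDs are the ones visible in this log:

```shell
# classify_nic VENDOR_ID DEVICE_ID -> prints the NIC family bucket the
# autotest scripts use. IDs copied from the pci_bus_cache lookups above;
# the real script keeps per-family arrays rather than a function.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b
```

The devices found later in the log (0000:0a:00.0/1, 0x8086:0x159b) land in the e810 bucket, which is why the `[[ e810 == e810 ]]` branch is taken.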
07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.120 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.121 07:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.121 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.121 
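The discovery loop resolves each PCI function to its kernel net interface by globbing sysfs (`/sys/bus/pci/devices/$pci/net/*`) and then stripping the directory prefix, which is where names like `cvl_0_0` come from. A self-contained sketch of the same two steps, run against a fake sysfs tree so it works without the test hardware (the temp-dir root is an assumption for illustration):

```shell
# Build a fake sysfs layout: one PCI function with one net interface.
sysfs_root=$(mktemp -d)
pci=0000:0a:00.0
mkdir -p "$sysfs_root/$pci/net/cvl_0_0"

# Step 1: glob produces one array entry per interface directory.
pci_net_devs=("$sysfs_root/$pci/net/"*)
# Step 2: ${var##*/} strips everything up to the last slash,
# leaving just the interface names (cf. nvmf/common.sh@399).
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs_root"
```

On the real node the glob matches the interfaces the ice driver created, producing the "Found net devices under 0000:0a:00.0: cvl_0_0" lines below.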
07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.121 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:13:17.379 00:13:17.379 --- 10.0.0.2 ping statistics --- 00:13:17.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.379 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:13:17.379 00:13:17.379 --- 10.0.0.1 ping statistics --- 00:13:17.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.379 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2435633 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2435633 00:13:17.379 07:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2435633 ']' 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.379 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.379 [2024-07-25 07:18:49.772001] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:17.379 [2024-07-25 07:18:49.772104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.379 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.379 [2024-07-25 07:18:49.841899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.637 [2024-07-25 07:18:49.962867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.637 [2024-07-25 07:18:49.962924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:17.637 [2024-07-25 07:18:49.962948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.637 [2024-07-25 07:18:49.962962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.637 [2024-07-25 07:18:49.962974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.637 [2024-07-25 07:18:49.963064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.637 [2024-07-25 07:18:49.963118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.637 [2024-07-25 07:18:49.963167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.637 [2024-07-25 07:18:49.963171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:18.571 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19096 00:13:18.571 [2024-07-25 07:18:51.051495] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:18.571 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:18.571 { 00:13:18.571 "nqn": "nqn.2016-06.io.spdk:cnode19096", 00:13:18.571 "tgt_name": "foobar", 00:13:18.571 "method": "nvmf_create_subsystem", 00:13:18.571 "req_id": 1 00:13:18.571 } 00:13:18.571 Got JSON-RPC error response 00:13:18.571 response: 00:13:18.571 { 00:13:18.571 "code": -32603, 00:13:18.571 "message": "Unable to find target foobar" 00:13:18.571 }' 00:13:18.571 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:18.571 { 00:13:18.571 "nqn": "nqn.2016-06.io.spdk:cnode19096", 00:13:18.571 "tgt_name": "foobar", 00:13:18.571 "method": "nvmf_create_subsystem", 00:13:18.571 "req_id": 1 00:13:18.571 } 00:13:18.571 Got JSON-RPC error response 00:13:18.571 response: 00:13:18.571 { 00:13:18.571 "code": -32603, 00:13:18.571 "message": "Unable to find target foobar" 00:13:18.571 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:18.571 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:18.571 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25878 00:13:18.829 [2024-07-25 07:18:51.300355] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25878: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:18.829 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:18.829 { 00:13:18.829 "nqn": "nqn.2016-06.io.spdk:cnode25878", 00:13:18.829 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.829 "method": "nvmf_create_subsystem", 00:13:18.829 "req_id": 1 00:13:18.829 } 00:13:18.829 Got JSON-RPC error response 00:13:18.829 response: 
00:13:18.829 { 00:13:18.829 "code": -32602, 00:13:18.829 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.829 }' 00:13:18.829 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:18.829 { 00:13:18.829 "nqn": "nqn.2016-06.io.spdk:cnode25878", 00:13:18.829 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.829 "method": "nvmf_create_subsystem", 00:13:18.829 "req_id": 1 00:13:18.829 } 00:13:18.829 Got JSON-RPC error response 00:13:18.829 response: 00:13:18.829 { 00:13:18.829 "code": -32602, 00:13:18.829 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.829 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:18.829 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:18.829 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15253 00:13:19.089 [2024-07-25 07:18:51.545079] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15253: invalid model number 'SPDK_Controller' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:19.089 { 00:13:19.089 "nqn": "nqn.2016-06.io.spdk:cnode15253", 00:13:19.089 "model_number": "SPDK_Controller\u001f", 00:13:19.089 "method": "nvmf_create_subsystem", 00:13:19.089 "req_id": 1 00:13:19.089 } 00:13:19.089 Got JSON-RPC error response 00:13:19.089 response: 00:13:19.089 { 00:13:19.089 "code": -32602, 00:13:19.089 "message": "Invalid MN SPDK_Controller\u001f" 00:13:19.089 }' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:19.089 { 00:13:19.089 "nqn": "nqn.2016-06.io.spdk:cnode15253", 00:13:19.089 "model_number": "SPDK_Controller\u001f", 00:13:19.089 "method": "nvmf_create_subsystem", 00:13:19.089 "req_id": 1 00:13:19.089 } 
00:13:19.089 Got JSON-RPC error response 00:13:19.089 response: 00:13:19.089 { 00:13:19.089 "code": -32602, 00:13:19.089 "message": "Invalid MN SPDK_Controller\u001f" 00:13:19.089 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:19.089 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.089 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:19.090 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:19.090 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:19.090 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.348 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Bqg>#0+m@hHk"+X8BM|ju' 00:13:19.348 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Bqg>#0+m@hHk"+X8BM|ju' nqn.2016-06.io.spdk:cnode26183 00:13:19.348 [2024-07-25 07:18:51.858118] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26183: invalid serial number 'Bqg>#0+m@hHk"+X8BM|ju' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:19.607 { 00:13:19.607 "nqn": "nqn.2016-06.io.spdk:cnode26183", 00:13:19.607 "serial_number": "Bqg>#0+m@hHk\"+X8BM|ju", 00:13:19.607 "method": "nvmf_create_subsystem", 00:13:19.607 "req_id": 1 00:13:19.607 } 00:13:19.607 Got JSON-RPC error response 00:13:19.607 response: 00:13:19.607 { 00:13:19.607 "code": -32602, 00:13:19.607 "message": "Invalid SN Bqg>#0+m@hHk\"+X8BM|ju" 00:13:19.607 }' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:19.607 { 00:13:19.607 "nqn": "nqn.2016-06.io.spdk:cnode26183", 00:13:19.607 "serial_number": "Bqg>#0+m@hHk\"+X8BM|ju", 00:13:19.607 "method": "nvmf_create_subsystem", 00:13:19.607 "req_id": 1 00:13:19.607 } 00:13:19.607 Got JSON-RPC error 
response 00:13:19.607 response: 00:13:19.607 { 00:13:19.607 "code": -32602, 00:13:19.607 "message": "Invalid SN Bqg>#0+m@hHk\"+X8BM|ju" 00:13:19.607 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:19.607 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:19.607 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:19.607 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:19.608 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:19.608 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 
00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 
00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:19.608 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.609 
07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Lj-bOnS\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'\''' 00:13:19.609 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Lj-bOnS\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'\''' nqn.2016-06.io.spdk:cnode8755 00:13:19.867 [2024-07-25 07:18:52.263421] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8755: invalid model number 'Lj-bOnS\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'' 00:13:19.867 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:19.867 { 00:13:19.867 "nqn": "nqn.2016-06.io.spdk:cnode8755", 00:13:19.867 "model_number": "Lj-bOnS\\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'\''", 
00:13:19.867 "method": "nvmf_create_subsystem", 00:13:19.867 "req_id": 1 00:13:19.867 } 00:13:19.867 Got JSON-RPC error response 00:13:19.867 response: 00:13:19.867 { 00:13:19.867 "code": -32602, 00:13:19.867 "message": "Invalid MN Lj-bOnS\\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'\''" 00:13:19.867 }' 00:13:19.867 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:19.867 { 00:13:19.867 "nqn": "nqn.2016-06.io.spdk:cnode8755", 00:13:19.867 "model_number": "Lj-bOnS\\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'", 00:13:19.867 "method": "nvmf_create_subsystem", 00:13:19.867 "req_id": 1 00:13:19.867 } 00:13:19.867 Got JSON-RPC error response 00:13:19.867 response: 00:13:19.867 { 00:13:19.867 "code": -32602, 00:13:19.867 "message": "Invalid MN Lj-bOnS\\Ow%4zdwfq6mBOYopbIu!*0p^;$fF/HmZ'" 00:13:19.867 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.867 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:20.125 [2024-07-25 07:18:52.500315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.125 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:20.393 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:20.393 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:20.393 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:20.393 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:20.393 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t 
tcp -a '' -s 4421 00:13:20.651 [2024-07-25 07:18:53.014000] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:20.651 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:20.651 { 00:13:20.651 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:20.651 "listen_address": { 00:13:20.651 "trtype": "tcp", 00:13:20.651 "traddr": "", 00:13:20.651 "trsvcid": "4421" 00:13:20.651 }, 00:13:20.651 "method": "nvmf_subsystem_remove_listener", 00:13:20.651 "req_id": 1 00:13:20.651 } 00:13:20.651 Got JSON-RPC error response 00:13:20.651 response: 00:13:20.651 { 00:13:20.651 "code": -32602, 00:13:20.651 "message": "Invalid parameters" 00:13:20.651 }' 00:13:20.651 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:20.651 { 00:13:20.651 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:20.651 "listen_address": { 00:13:20.651 "trtype": "tcp", 00:13:20.651 "traddr": "", 00:13:20.651 "trsvcid": "4421" 00:13:20.651 }, 00:13:20.651 "method": "nvmf_subsystem_remove_listener", 00:13:20.651 "req_id": 1 00:13:20.651 } 00:13:20.651 Got JSON-RPC error response 00:13:20.651 response: 00:13:20.651 { 00:13:20.651 "code": -32602, 00:13:20.651 "message": "Invalid parameters" 00:13:20.651 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:20.652 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10494 -i 0 00:13:20.909 [2024-07-25 07:18:53.258756] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10494: invalid cntlid range [0-65519] 00:13:20.909 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:20.909 { 00:13:20.909 "nqn": "nqn.2016-06.io.spdk:cnode10494", 00:13:20.909 "min_cntlid": 0, 00:13:20.909 "method": "nvmf_create_subsystem", 00:13:20.909 "req_id": 1 
00:13:20.909 } 00:13:20.909 Got JSON-RPC error response 00:13:20.909 response: 00:13:20.909 { 00:13:20.909 "code": -32602, 00:13:20.909 "message": "Invalid cntlid range [0-65519]" 00:13:20.909 }' 00:13:20.909 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:20.909 { 00:13:20.909 "nqn": "nqn.2016-06.io.spdk:cnode10494", 00:13:20.909 "min_cntlid": 0, 00:13:20.909 "method": "nvmf_create_subsystem", 00:13:20.909 "req_id": 1 00:13:20.909 } 00:13:20.909 Got JSON-RPC error response 00:13:20.909 response: 00:13:20.909 { 00:13:20.909 "code": -32602, 00:13:20.909 "message": "Invalid cntlid range [0-65519]" 00:13:20.909 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.909 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4090 -i 65520 00:13:21.166 [2024-07-25 07:18:53.503574] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4090: invalid cntlid range [65520-65519] 00:13:21.166 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:21.166 { 00:13:21.166 "nqn": "nqn.2016-06.io.spdk:cnode4090", 00:13:21.166 "min_cntlid": 65520, 00:13:21.166 "method": "nvmf_create_subsystem", 00:13:21.166 "req_id": 1 00:13:21.166 } 00:13:21.166 Got JSON-RPC error response 00:13:21.166 response: 00:13:21.166 { 00:13:21.166 "code": -32602, 00:13:21.166 "message": "Invalid cntlid range [65520-65519]" 00:13:21.166 }' 00:13:21.166 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:21.166 { 00:13:21.166 "nqn": "nqn.2016-06.io.spdk:cnode4090", 00:13:21.166 "min_cntlid": 65520, 00:13:21.166 "method": "nvmf_create_subsystem", 00:13:21.166 "req_id": 1 00:13:21.166 } 00:13:21.166 Got JSON-RPC error response 00:13:21.166 response: 00:13:21.166 { 00:13:21.166 "code": -32602, 00:13:21.166 
"message": "Invalid cntlid range [65520-65519]" 00:13:21.166 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.166 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25585 -I 0 00:13:21.425 [2024-07-25 07:18:53.748400] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25585: invalid cntlid range [1-0] 00:13:21.425 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:21.425 { 00:13:21.425 "nqn": "nqn.2016-06.io.spdk:cnode25585", 00:13:21.425 "max_cntlid": 0, 00:13:21.425 "method": "nvmf_create_subsystem", 00:13:21.425 "req_id": 1 00:13:21.425 } 00:13:21.425 Got JSON-RPC error response 00:13:21.425 response: 00:13:21.425 { 00:13:21.425 "code": -32602, 00:13:21.425 "message": "Invalid cntlid range [1-0]" 00:13:21.425 }' 00:13:21.425 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:21.425 { 00:13:21.425 "nqn": "nqn.2016-06.io.spdk:cnode25585", 00:13:21.425 "max_cntlid": 0, 00:13:21.425 "method": "nvmf_create_subsystem", 00:13:21.425 "req_id": 1 00:13:21.425 } 00:13:21.425 Got JSON-RPC error response 00:13:21.425 response: 00:13:21.425 { 00:13:21.425 "code": -32602, 00:13:21.425 "message": "Invalid cntlid range [1-0]" 00:13:21.425 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.425 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31956 -I 65520 00:13:21.707 [2024-07-25 07:18:53.997252] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31956: invalid cntlid range [1-65520] 00:13:21.707 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:21.707 { 00:13:21.707 "nqn": 
"nqn.2016-06.io.spdk:cnode31956", 00:13:21.707 "max_cntlid": 65520, 00:13:21.707 "method": "nvmf_create_subsystem", 00:13:21.707 "req_id": 1 00:13:21.707 } 00:13:21.707 Got JSON-RPC error response 00:13:21.707 response: 00:13:21.707 { 00:13:21.707 "code": -32602, 00:13:21.707 "message": "Invalid cntlid range [1-65520]" 00:13:21.707 }' 00:13:21.707 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:21.707 { 00:13:21.707 "nqn": "nqn.2016-06.io.spdk:cnode31956", 00:13:21.707 "max_cntlid": 65520, 00:13:21.707 "method": "nvmf_create_subsystem", 00:13:21.707 "req_id": 1 00:13:21.707 } 00:13:21.707 Got JSON-RPC error response 00:13:21.707 response: 00:13:21.707 { 00:13:21.707 "code": -32602, 00:13:21.707 "message": "Invalid cntlid range [1-65520]" 00:13:21.707 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.707 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20805 -i 6 -I 5 00:13:21.965 [2024-07-25 07:18:54.250091] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20805: invalid cntlid range [6-5] 00:13:21.965 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:21.965 { 00:13:21.965 "nqn": "nqn.2016-06.io.spdk:cnode20805", 00:13:21.965 "min_cntlid": 6, 00:13:21.965 "max_cntlid": 5, 00:13:21.965 "method": "nvmf_create_subsystem", 00:13:21.965 "req_id": 1 00:13:21.965 } 00:13:21.965 Got JSON-RPC error response 00:13:21.965 response: 00:13:21.965 { 00:13:21.965 "code": -32602, 00:13:21.965 "message": "Invalid cntlid range [6-5]" 00:13:21.965 }' 00:13:21.965 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:21.965 { 00:13:21.965 "nqn": "nqn.2016-06.io.spdk:cnode20805", 00:13:21.965 "min_cntlid": 6, 00:13:21.965 "max_cntlid": 5, 00:13:21.965 "method": 
"nvmf_create_subsystem", 00:13:21.965 "req_id": 1 00:13:21.965 } 00:13:21.965 Got JSON-RPC error response 00:13:21.965 response: 00:13:21.965 { 00:13:21.965 "code": -32602, 00:13:21.965 "message": "Invalid cntlid range [6-5]" 00:13:21.965 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.965 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:21.965 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:21.965 { 00:13:21.965 "name": "foobar", 00:13:21.965 "method": "nvmf_delete_target", 00:13:21.965 "req_id": 1 00:13:21.965 } 00:13:21.965 Got JSON-RPC error response 00:13:21.965 response: 00:13:21.965 { 00:13:21.965 "code": -32602, 00:13:21.965 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:21.965 }' 00:13:21.965 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:21.965 { 00:13:21.965 "name": "foobar", 00:13:21.965 "method": "nvmf_delete_target", 00:13:21.965 "req_id": 1 00:13:21.965 } 00:13:21.965 Got JSON-RPC error response 00:13:21.965 response: 00:13:21.965 { 00:13:21.965 "code": -32602, 00:13:21.965 "message": "The specified target doesn't exist, cannot delete it." 
00:13:21.965 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.966 rmmod nvme_tcp 00:13:21.966 rmmod nvme_fabrics 00:13:21.966 rmmod nvme_keyring 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2435633 ']' 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2435633 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2435633 ']' 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2435633 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2435633 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2435633' 00:13:21.966 killing process with pid 2435633 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2435633 00:13:21.966 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2435633 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.224 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.758 00:13:24.758 real 0m9.308s 00:13:24.758 user 0m22.699s 00:13:24.758 sys 0m2.520s 
00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 ************************************ 00:13:24.758 END TEST nvmf_invalid 00:13:24.758 ************************************ 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 ************************************ 00:13:24.758 START TEST nvmf_connect_stress 00:13:24.758 ************************************ 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.758 * Looking for test storage... 
00:13:24.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.758 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.759 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.659 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.659 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.659 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.659 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.659 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.659 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:26.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.660 07:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:26.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.660 07:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:26.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:26.660 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.660 
07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.660 
07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:13:26.660 00:13:26.660 --- 10.0.0.2 ping statistics --- 00:13:26.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.660 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:26.660 00:13:26.660 --- 10.0.0.1 ping statistics --- 00:13:26.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.660 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.660 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2438276 00:13:26.661 07:18:58 
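The `nvmf_tcp_init` steps traced above (flush addresses, create a network namespace, move the target interface into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420, ping both ways) can be sketched as a standalone script. This is a minimal sketch, not the actual `nvmf/common.sh` code: the interface names are taken from this log, the `run` wrapper is an added convenience that only echoes commands unless `DRY_RUN=0`, since the real commands need root and the physical E810 net devices.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based NVMe/TCP test topology seen in the log.
# Defaults to dry-run: prints each command instead of executing it.
set -uo pipefail

TARGET_IF=cvl_0_0          # moved into the namespace; serves 10.0.0.2:4420
INITIATOR_IF=cvl_0_1       # stays in the root namespace; uses 10.0.0.1
NETNS=cvl_0_0_ns_spdk

# Echo the command in dry-run mode (default); execute it when DRY_RUN=0.
run() { if [[ "${DRY_RUN:-1}" == 0 ]]; then "$@"; else echo "+ $*"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
# Allow NVMe/TCP traffic in before any existing rules.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability checks, matching the two pings in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

Isolating the target side in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real NIC ports, which is why the log later launches `nvmf_tgt` via `ip netns exec cvl_0_0_ns_spdk`.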
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2438276 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2438276 ']' 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.661 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.661 [2024-07-25 07:18:59.043701] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:26.661 [2024-07-25 07:18:59.043795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.661 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.661 [2024-07-25 07:18:59.109141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:26.919 [2024-07-25 07:18:59.221534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:26.919 [2024-07-25 07:18:59.221593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.919 [2024-07-25 07:18:59.221607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.919 [2024-07-25 07:18:59.221618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.919 [2024-07-25 07:18:59.221635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.919 [2024-07-25 07:18:59.221722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.919 [2024-07-25 07:18:59.221786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.919 [2024-07-25 07:18:59.221789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.919 [2024-07-25 07:18:59.369825] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.919 [2024-07-25 07:18:59.408334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.919 NULL1 00:13:26.919 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2438307 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.920 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.177 07:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.177 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.435 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:27.435 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.435 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.435 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.693 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.693 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:27.693 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.693 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.693 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.950 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.950 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:27.950 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.950 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.950 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.514 
07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.514 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:28.514 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.514 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.514 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.771 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.771 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:28.771 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.771 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.771 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.029 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.029 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:29.029 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.029 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.029 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.287 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.287 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 
00:13:29.287 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.287 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.287 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.544 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.544 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:29.544 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.544 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.544 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.109 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.109 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:30.109 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.109 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.109 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.366 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.366 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:30.366 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.366 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:30.366 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.622 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.622 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:30.622 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.622 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.622 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.879 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.879 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:30.879 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.879 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.879 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.136 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.136 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:31.136 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.136 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.136 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.699 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:31.699 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:31.699 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.699 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.699 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.956 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.956 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:31.956 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.956 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.956 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.213 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.213 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:32.213 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.213 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.213 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.470 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.470 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:32.470 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.470 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.470 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.727 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.727 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:32.727 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.727 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.727 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:33.290 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.290 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.548 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.548 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:33.548 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.548 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.548 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:13:33.805 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.805 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:33.805 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.805 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.805 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.063 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.063 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:34.063 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.063 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.063 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.628 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.628 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:34.628 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.628 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.628 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.885 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.885 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:34.885 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.885 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.885 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.142 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.143 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:35.143 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.143 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.143 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.400 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.400 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:35.400 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.400 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.400 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.657 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.657 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:35.657 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.657 07:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.657 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.263 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.263 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:36.263 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.263 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.263 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.521 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.521 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:36.521 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.521 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.521 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:36.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.035 
07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.035 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:37.035 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.036 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.036 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.036 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2438307 00:13:37.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2438307) - No such process 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2438307 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.293 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.294 07:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.294 rmmod nvme_tcp 00:13:37.294 rmmod nvme_fabrics 00:13:37.294 rmmod nvme_keyring 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2438276 ']' 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2438276 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2438276 ']' 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2438276 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.294 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2438276 00:13:37.552 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:37.552 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:37.552 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2438276' 00:13:37.552 killing process with pid 2438276 00:13:37.552 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2438276 00:13:37.552 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2438276 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.811 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.714 00:13:39.714 real 0m15.315s 00:13:39.714 user 0m38.101s 00:13:39.714 sys 0m6.117s 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.714 ************************************ 00:13:39.714 END TEST nvmf_connect_stress 00:13:39.714 ************************************ 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.714 ************************************ 00:13:39.714 START TEST nvmf_fused_ordering 00:13:39.714 ************************************ 00:13:39.714 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:39.972 * Looking for test storage... 00:13:39.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.972 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.973 07:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:39.973 07:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.973 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.875 07:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:41.875 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.875 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:41.876 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:41.876 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:41.876 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.876 07:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.876 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:13:42.135 00:13:42.135 --- 10.0.0.2 ping statistics --- 00:13:42.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.135 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:13:42.135 00:13:42.135 --- 10.0.0.1 ping statistics --- 00:13:42.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.135 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2441469 00:13:42.135 07:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2441469 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2441469 ']' 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.135 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.135 [2024-07-25 07:19:14.498737] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:13:42.135 [2024-07-25 07:19:14.498831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.135 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.135 [2024-07-25 07:19:14.568638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.393 [2024-07-25 07:19:14.687851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:42.393 [2024-07-25 07:19:14.687908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.393 [2024-07-25 07:19:14.687935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.393 [2024-07-25 07:19:14.687947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.393 [2024-07-25 07:19:14.687967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.393 [2024-07-25 07:19:14.687998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.957 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.957 [2024-07-25 07:19:15.484627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.214 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.214 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.214 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.214 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.214 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.214 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.215 [2024-07-25 07:19:15.500826] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.215 NULL1 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:43.215 07:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.215 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:43.215 [2024-07-25 07:19:15.546251] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:13:43.215 [2024-07-25 07:19:15.546298] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441606 ] 00:13:43.215 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.779 Attached to nqn.2016-06.io.spdk:cnode1 00:13:43.779 Namespace ID: 1 size: 1GB 00:13:43.779 fused_ordering(0) 00:13:43.779 fused_ordering(1) 00:13:43.779 fused_ordering(2) 00:13:43.779 fused_ordering(3) 00:13:43.779 fused_ordering(4) 00:13:43.779 fused_ordering(5) 00:13:43.779 fused_ordering(6) 00:13:43.779 fused_ordering(7) 00:13:43.779 fused_ordering(8) 00:13:43.779 fused_ordering(9) 00:13:43.779 fused_ordering(10) 00:13:43.779 fused_ordering(11) 00:13:43.779 fused_ordering(12) 00:13:43.779 fused_ordering(13) 00:13:43.779 fused_ordering(14) 00:13:43.779 fused_ordering(15) 00:13:43.779 fused_ordering(16) 00:13:43.779 fused_ordering(17) 00:13:43.779 fused_ordering(18) 00:13:43.779 fused_ordering(19) 00:13:43.779 fused_ordering(20) 00:13:43.779 fused_ordering(21) 00:13:43.779 fused_ordering(22) 00:13:43.779 fused_ordering(23) 00:13:43.779 fused_ordering(24) 00:13:43.779 fused_ordering(25) 00:13:43.779 fused_ordering(26) 00:13:43.779 fused_ordering(27) 00:13:43.779 fused_ordering(28) 00:13:43.779 fused_ordering(29) 00:13:43.779 fused_ordering(30) 00:13:43.779 fused_ordering(31) 00:13:43.779 fused_ordering(32) 00:13:43.779 fused_ordering(33) 00:13:43.779 fused_ordering(34) 00:13:43.779 fused_ordering(35) 00:13:43.779 fused_ordering(36) 00:13:43.779 fused_ordering(37) 00:13:43.779 fused_ordering(38) 00:13:43.779 fused_ordering(39) 00:13:43.779 fused_ordering(40) 00:13:43.779 fused_ordering(41) 00:13:43.779 fused_ordering(42) 00:13:43.779 fused_ordering(43) 00:13:43.779 fused_ordering(44) 00:13:43.779 fused_ordering(45) 00:13:43.779 fused_ordering(46) 00:13:43.779 fused_ordering(47) 00:13:43.779 
fused_ordering(48) 00:13:43.779 fused_ordering(49) 00:13:43.779 fused_ordering(50) 00:13:43.779 fused_ordering(51) 00:13:43.779 fused_ordering(52) 00:13:43.779 fused_ordering(53) 00:13:43.779 fused_ordering(54) 00:13:43.779 fused_ordering(55) 00:13:43.779 fused_ordering(56) 00:13:43.779 fused_ordering(57) 00:13:43.779 fused_ordering(58) 00:13:43.779 fused_ordering(59) 00:13:43.779 fused_ordering(60) 00:13:43.779 fused_ordering(61) 00:13:43.779 fused_ordering(62) 00:13:43.779 fused_ordering(63) 00:13:43.779 fused_ordering(64) 00:13:43.779 fused_ordering(65) 00:13:43.779 fused_ordering(66) 00:13:43.779 fused_ordering(67) 00:13:43.779 fused_ordering(68) 00:13:43.779 fused_ordering(69) 00:13:43.780 fused_ordering(70) 00:13:43.780 fused_ordering(71) 00:13:43.780 fused_ordering(72) 00:13:43.780 fused_ordering(73) 00:13:43.780 fused_ordering(74) 00:13:43.780 fused_ordering(75) 00:13:43.780 fused_ordering(76) 00:13:43.780 fused_ordering(77) 00:13:43.780 fused_ordering(78) 00:13:43.780 fused_ordering(79) 00:13:43.780 fused_ordering(80) 00:13:43.780 fused_ordering(81) 00:13:43.780 fused_ordering(82) 00:13:43.780 fused_ordering(83) 00:13:43.780 fused_ordering(84) 00:13:43.780 fused_ordering(85) 00:13:43.780 fused_ordering(86) 00:13:43.780 fused_ordering(87) 00:13:43.780 fused_ordering(88) 00:13:43.780 fused_ordering(89) 00:13:43.780 fused_ordering(90) 00:13:43.780 fused_ordering(91) 00:13:43.780 fused_ordering(92) 00:13:43.780 fused_ordering(93) 00:13:43.780 fused_ordering(94) 00:13:43.780 fused_ordering(95) 00:13:43.780 fused_ordering(96) 00:13:43.780 fused_ordering(97) 00:13:43.780 fused_ordering(98) 00:13:43.780 fused_ordering(99) 00:13:43.780 fused_ordering(100) 00:13:43.780 fused_ordering(101) 00:13:43.780 fused_ordering(102) 00:13:43.780 fused_ordering(103) 00:13:43.780 fused_ordering(104) 00:13:43.780 fused_ordering(105) 00:13:43.780 fused_ordering(106) 00:13:43.780 fused_ordering(107) 00:13:43.780 fused_ordering(108) 00:13:43.780 fused_ordering(109) 00:13:43.780 
fused_ordering(110) 00:13:43.780 fused_ordering(111) 00:13:43.780 fused_ordering(112) 00:13:43.780 fused_ordering(113) 00:13:43.780 fused_ordering(114) 00:13:43.780 fused_ordering(115) 00:13:43.780 fused_ordering(116) 00:13:43.780 fused_ordering(117) 00:13:43.780 fused_ordering(118) 00:13:43.780 fused_ordering(119) 00:13:43.780 fused_ordering(120) 00:13:43.780 fused_ordering(121) 00:13:43.780 fused_ordering(122) 00:13:43.780 fused_ordering(123) 00:13:43.780 fused_ordering(124) 00:13:43.780 fused_ordering(125) 00:13:43.780 fused_ordering(126) 00:13:43.780 fused_ordering(127) 00:13:43.780 fused_ordering(128) 00:13:43.780 fused_ordering(129) 00:13:43.780 fused_ordering(130) 00:13:43.780 fused_ordering(131) 00:13:43.780 fused_ordering(132) 00:13:43.780 fused_ordering(133) 00:13:43.780 fused_ordering(134) 00:13:43.780 fused_ordering(135) 00:13:43.780 fused_ordering(136) 00:13:43.780 fused_ordering(137) 00:13:43.780 fused_ordering(138) 00:13:43.780 fused_ordering(139) 00:13:43.780 fused_ordering(140) 00:13:43.780 fused_ordering(141) 00:13:43.780 fused_ordering(142) 00:13:43.780 fused_ordering(143) 00:13:43.780 fused_ordering(144) 00:13:43.780 fused_ordering(145) 00:13:43.780 fused_ordering(146) 00:13:43.780 fused_ordering(147) 00:13:43.780 fused_ordering(148) 00:13:43.780 fused_ordering(149) 00:13:43.780 fused_ordering(150) 00:13:43.780 fused_ordering(151) 00:13:43.780 fused_ordering(152) 00:13:43.780 fused_ordering(153) 00:13:43.780 fused_ordering(154) 00:13:43.780 fused_ordering(155) 00:13:43.780 fused_ordering(156) 00:13:43.780 fused_ordering(157) 00:13:43.780 fused_ordering(158) 00:13:43.780 fused_ordering(159) 00:13:43.780 fused_ordering(160) 00:13:43.780 fused_ordering(161) 00:13:43.780 fused_ordering(162) 00:13:43.780 fused_ordering(163) 00:13:43.780 fused_ordering(164) 00:13:43.780 fused_ordering(165) 00:13:43.780 fused_ordering(166) 00:13:43.780 fused_ordering(167) 00:13:43.780 fused_ordering(168) 00:13:43.780 fused_ordering(169) 00:13:43.780 fused_ordering(170) 
00:13:43.780 fused_ordering(171) 00:13:43.780 fused_ordering(172) 00:13:43.780 fused_ordering(173) 00:13:43.780 fused_ordering(174) 00:13:43.780 fused_ordering(175) 00:13:43.780 fused_ordering(176) 00:13:43.780 fused_ordering(177) 00:13:43.780 fused_ordering(178) 00:13:43.780 fused_ordering(179) 00:13:43.780 fused_ordering(180) 00:13:43.780 fused_ordering(181) 00:13:43.780 fused_ordering(182) 00:13:43.780 fused_ordering(183) 00:13:43.780 fused_ordering(184) 00:13:43.780 fused_ordering(185) 00:13:43.780 fused_ordering(186) 00:13:43.780 fused_ordering(187) 00:13:43.780 fused_ordering(188) 00:13:43.780 fused_ordering(189) 00:13:43.780 fused_ordering(190) 00:13:43.780 fused_ordering(191) 00:13:43.780 fused_ordering(192) 00:13:43.780 fused_ordering(193) 00:13:43.780 fused_ordering(194) 00:13:43.780 fused_ordering(195) 00:13:43.780 fused_ordering(196) 00:13:43.780 fused_ordering(197) 00:13:43.780 fused_ordering(198) 00:13:43.780 fused_ordering(199) 00:13:43.780 fused_ordering(200) 00:13:43.780 fused_ordering(201) 00:13:43.780 fused_ordering(202) 00:13:43.780 fused_ordering(203) 00:13:43.780 fused_ordering(204) 00:13:43.780 fused_ordering(205) 00:13:44.038 fused_ordering(206) 00:13:44.038 fused_ordering(207) 00:13:44.038 fused_ordering(208) 00:13:44.038 fused_ordering(209) 00:13:44.038 fused_ordering(210) 00:13:44.038 fused_ordering(211) 00:13:44.038 fused_ordering(212) 00:13:44.038 fused_ordering(213) 00:13:44.038 fused_ordering(214) 00:13:44.038 fused_ordering(215) 00:13:44.038 fused_ordering(216) 00:13:44.038 fused_ordering(217) 00:13:44.038 fused_ordering(218) 00:13:44.038 fused_ordering(219) 00:13:44.038 fused_ordering(220) 00:13:44.038 fused_ordering(221) 00:13:44.039 fused_ordering(222) 00:13:44.039 fused_ordering(223) 00:13:44.039 fused_ordering(224) 00:13:44.039 fused_ordering(225) 00:13:44.039 fused_ordering(226) 00:13:44.039 fused_ordering(227) 00:13:44.039 fused_ordering(228) 00:13:44.039 fused_ordering(229) 00:13:44.039 fused_ordering(230) 00:13:44.039 
fused_ordering(231) 00:13:44.039 fused_ordering(232) 00:13:44.039 fused_ordering(233) 00:13:44.039 fused_ordering(234) 00:13:44.039 fused_ordering(235) 00:13:44.039 fused_ordering(236) 00:13:44.039 fused_ordering(237) 00:13:44.039 fused_ordering(238) 00:13:44.039 fused_ordering(239) 00:13:44.039 fused_ordering(240) 00:13:44.039 fused_ordering(241) 00:13:44.039 fused_ordering(242) 00:13:44.039 fused_ordering(243) 00:13:44.039 fused_ordering(244) 00:13:44.039 fused_ordering(245) 00:13:44.039 fused_ordering(246) 00:13:44.039 fused_ordering(247) 00:13:44.039 fused_ordering(248) 00:13:44.039 fused_ordering(249) 00:13:44.039 fused_ordering(250) 00:13:44.039 fused_ordering(251) 00:13:44.039 fused_ordering(252) 00:13:44.039 fused_ordering(253) 00:13:44.039 fused_ordering(254) 00:13:44.039 fused_ordering(255) 00:13:44.039 fused_ordering(256) 00:13:44.039 fused_ordering(257) 00:13:44.039 fused_ordering(258) 00:13:44.039 fused_ordering(259) 00:13:44.039 fused_ordering(260) 00:13:44.039 fused_ordering(261) 00:13:44.039 fused_ordering(262) 00:13:44.039 fused_ordering(263) 00:13:44.039 fused_ordering(264) 00:13:44.039 fused_ordering(265) 00:13:44.039 fused_ordering(266) 00:13:44.039 fused_ordering(267) 00:13:44.039 fused_ordering(268) 00:13:44.039 fused_ordering(269) 00:13:44.039 fused_ordering(270) 00:13:44.039 fused_ordering(271) 00:13:44.039 fused_ordering(272) 00:13:44.039 fused_ordering(273) 00:13:44.039 fused_ordering(274) 00:13:44.039 fused_ordering(275) 00:13:44.039 fused_ordering(276) 00:13:44.039 fused_ordering(277) 00:13:44.039 fused_ordering(278) 00:13:44.039 fused_ordering(279) 00:13:44.039 fused_ordering(280) 00:13:44.039 fused_ordering(281) 00:13:44.039 fused_ordering(282) 00:13:44.039 fused_ordering(283) 00:13:44.039 fused_ordering(284) 00:13:44.039 fused_ordering(285) 00:13:44.039 fused_ordering(286) 00:13:44.039 fused_ordering(287) 00:13:44.039 fused_ordering(288) 00:13:44.039 fused_ordering(289) 00:13:44.039 fused_ordering(290) 00:13:44.039 fused_ordering(291) 
00:13:44.039 fused_ordering(292) 00:13:44.039 fused_ordering(293) 00:13:44.039 fused_ordering(294) 00:13:44.039 fused_ordering(295) 00:13:44.039 fused_ordering(296) 00:13:44.039 fused_ordering(297) 00:13:44.039 fused_ordering(298) 00:13:44.039 fused_ordering(299) 00:13:44.039 fused_ordering(300) 00:13:44.039 fused_ordering(301) 00:13:44.039 fused_ordering(302) 00:13:44.039 fused_ordering(303) 00:13:44.039 fused_ordering(304) 00:13:44.039 fused_ordering(305) 00:13:44.039 fused_ordering(306) 00:13:44.039 fused_ordering(307) 00:13:44.039 fused_ordering(308) 00:13:44.039 fused_ordering(309) 00:13:44.039 fused_ordering(310) 00:13:44.039 fused_ordering(311) 00:13:44.039 fused_ordering(312) 00:13:44.039 fused_ordering(313) 00:13:44.039 fused_ordering(314) 00:13:44.039 fused_ordering(315) 00:13:44.039 fused_ordering(316) 00:13:44.039 fused_ordering(317) 00:13:44.039 fused_ordering(318) 00:13:44.039 fused_ordering(319) 00:13:44.039 fused_ordering(320) 00:13:44.039 fused_ordering(321) 00:13:44.039 fused_ordering(322) 00:13:44.039 fused_ordering(323) 00:13:44.039 fused_ordering(324) 00:13:44.039 fused_ordering(325) 00:13:44.039 fused_ordering(326) 00:13:44.039 fused_ordering(327) 00:13:44.039 fused_ordering(328) 00:13:44.039 fused_ordering(329) 00:13:44.039 fused_ordering(330) 00:13:44.039 fused_ordering(331) 00:13:44.039 fused_ordering(332) 00:13:44.039 fused_ordering(333) 00:13:44.039 fused_ordering(334) 00:13:44.039 fused_ordering(335) 00:13:44.039 fused_ordering(336) 00:13:44.039 fused_ordering(337) 00:13:44.039 fused_ordering(338) 00:13:44.039 fused_ordering(339) 00:13:44.039 fused_ordering(340) 00:13:44.039 fused_ordering(341) 00:13:44.039 fused_ordering(342) 00:13:44.039 fused_ordering(343) 00:13:44.039 fused_ordering(344) 00:13:44.039 fused_ordering(345) 00:13:44.039 fused_ordering(346) 00:13:44.039 fused_ordering(347) 00:13:44.039 fused_ordering(348) 00:13:44.039 fused_ordering(349) 00:13:44.039 fused_ordering(350) 00:13:44.039 fused_ordering(351) 00:13:44.039 
fused_ordering(352) 00:13:44.039 fused_ordering(353) 00:13:44.039 fused_ordering(354) 00:13:44.039 fused_ordering(355) 00:13:44.039 fused_ordering(356) 00:13:44.039 fused_ordering(357) 00:13:44.039 fused_ordering(358) 00:13:44.039 fused_ordering(359) 00:13:44.039 fused_ordering(360) 00:13:44.039 fused_ordering(361) 00:13:44.039 fused_ordering(362) 00:13:44.039 fused_ordering(363) 00:13:44.039 fused_ordering(364) 00:13:44.039 fused_ordering(365) 00:13:44.039 fused_ordering(366) 00:13:44.039 fused_ordering(367) 00:13:44.039 fused_ordering(368) 00:13:44.039 fused_ordering(369) 00:13:44.039 fused_ordering(370) 00:13:44.039 fused_ordering(371) 00:13:44.039 fused_ordering(372) 00:13:44.039 fused_ordering(373) 00:13:44.039 fused_ordering(374) 00:13:44.039 fused_ordering(375) 00:13:44.039 fused_ordering(376) 00:13:44.039 fused_ordering(377) 00:13:44.039 fused_ordering(378) 00:13:44.039 fused_ordering(379) 00:13:44.039 fused_ordering(380) 00:13:44.039 fused_ordering(381) 00:13:44.039 fused_ordering(382) 00:13:44.039 fused_ordering(383) 00:13:44.039 fused_ordering(384) 00:13:44.039 fused_ordering(385) 00:13:44.039 fused_ordering(386) 00:13:44.039 fused_ordering(387) 00:13:44.039 fused_ordering(388) 00:13:44.039 fused_ordering(389) 00:13:44.039 fused_ordering(390) 00:13:44.039 fused_ordering(391) 00:13:44.039 fused_ordering(392) 00:13:44.039 fused_ordering(393) 00:13:44.039 fused_ordering(394) 00:13:44.039 fused_ordering(395) 00:13:44.039 fused_ordering(396) 00:13:44.039 fused_ordering(397) 00:13:44.039 fused_ordering(398) 00:13:44.039 fused_ordering(399) 00:13:44.039 fused_ordering(400) 00:13:44.039 fused_ordering(401) 00:13:44.039 fused_ordering(402) 00:13:44.039 fused_ordering(403) 00:13:44.039 fused_ordering(404) 00:13:44.039 fused_ordering(405) 00:13:44.039 fused_ordering(406) 00:13:44.039 fused_ordering(407) 00:13:44.039 fused_ordering(408) 00:13:44.039 fused_ordering(409) 00:13:44.039 fused_ordering(410) 00:13:44.604 fused_ordering(411) 00:13:44.604 fused_ordering(412) 
00:13:44.604 fused_ordering(413) 00:13:44.604 fused_ordering(414) 00:13:44.604 fused_ordering(415) 00:13:44.604 fused_ordering(416) 00:13:44.604 fused_ordering(417) 00:13:44.604 fused_ordering(418) 00:13:44.604 fused_ordering(419) 00:13:44.604 fused_ordering(420) 00:13:44.604 fused_ordering(421) 00:13:44.604 fused_ordering(422) 00:13:44.604 fused_ordering(423) 00:13:44.604 fused_ordering(424) 00:13:44.604 fused_ordering(425) 00:13:44.604 fused_ordering(426) 00:13:44.604 fused_ordering(427) 00:13:44.604 fused_ordering(428) 00:13:44.604 fused_ordering(429) 00:13:44.604 fused_ordering(430) 00:13:44.604 fused_ordering(431) 00:13:44.604 fused_ordering(432) 00:13:44.604 fused_ordering(433) 00:13:44.604 fused_ordering(434) 00:13:44.604 fused_ordering(435) 00:13:44.604 fused_ordering(436) 00:13:44.604 fused_ordering(437) 00:13:44.605 fused_ordering(438) 00:13:44.605 fused_ordering(439) 00:13:44.605 fused_ordering(440) 00:13:44.605 fused_ordering(441) 00:13:44.605 fused_ordering(442) 00:13:44.605 fused_ordering(443) 00:13:44.605 fused_ordering(444) 00:13:44.605 fused_ordering(445) 00:13:44.605 fused_ordering(446) 00:13:44.605 fused_ordering(447) 00:13:44.605 fused_ordering(448) 00:13:44.605 fused_ordering(449) 00:13:44.605 fused_ordering(450) 00:13:44.605 fused_ordering(451) 00:13:44.605 fused_ordering(452) 00:13:44.605 fused_ordering(453) 00:13:44.605 fused_ordering(454) 00:13:44.605 fused_ordering(455) 00:13:44.605 fused_ordering(456) 00:13:44.605 fused_ordering(457) 00:13:44.605 fused_ordering(458) 00:13:44.605 fused_ordering(459) 00:13:44.605 fused_ordering(460) 00:13:44.605 fused_ordering(461) 00:13:44.605 fused_ordering(462) 00:13:44.605 fused_ordering(463) 00:13:44.605 fused_ordering(464) 00:13:44.605 fused_ordering(465) 00:13:44.605 fused_ordering(466) 00:13:44.605 fused_ordering(467) 00:13:44.605 fused_ordering(468) 00:13:44.605 fused_ordering(469) 00:13:44.605 fused_ordering(470) 00:13:44.605 fused_ordering(471) 00:13:44.605 fused_ordering(472) 00:13:44.605 
fused_ordering(473) 00:13:44.605 fused_ordering(474) 00:13:44.605 fused_ordering(475) 00:13:44.605 fused_ordering(476) 00:13:44.605 fused_ordering(477) 00:13:44.605 fused_ordering(478) 00:13:44.605 fused_ordering(479) 00:13:44.605 fused_ordering(480) 00:13:44.605 fused_ordering(481) 00:13:44.605 fused_ordering(482) 00:13:44.605 fused_ordering(483) 00:13:44.605 fused_ordering(484) 00:13:44.605 fused_ordering(485) 00:13:44.605 fused_ordering(486) 00:13:44.605 fused_ordering(487) 00:13:44.605 fused_ordering(488) 00:13:44.605 fused_ordering(489) 00:13:44.605 fused_ordering(490) 00:13:44.605 fused_ordering(491) 00:13:44.605 fused_ordering(492) 00:13:44.605 fused_ordering(493) 00:13:44.605 fused_ordering(494) 00:13:44.605 fused_ordering(495) 00:13:44.605 fused_ordering(496) 00:13:44.605 fused_ordering(497) 00:13:44.605 fused_ordering(498) 00:13:44.605 fused_ordering(499) 00:13:44.605 fused_ordering(500) 00:13:44.605 fused_ordering(501) 00:13:44.605 fused_ordering(502) 00:13:44.605 fused_ordering(503) 00:13:44.605 fused_ordering(504) 00:13:44.605 fused_ordering(505) 00:13:44.605 fused_ordering(506) 00:13:44.605 fused_ordering(507) 00:13:44.605 fused_ordering(508) 00:13:44.605 fused_ordering(509) 00:13:44.605 fused_ordering(510) 00:13:44.605 fused_ordering(511) 00:13:44.605 fused_ordering(512) 00:13:44.605 fused_ordering(513) 00:13:44.605 fused_ordering(514) 00:13:44.605 fused_ordering(515) 00:13:44.605 fused_ordering(516) 00:13:44.605 fused_ordering(517) 00:13:44.605 fused_ordering(518) 00:13:44.605 fused_ordering(519) 00:13:44.605 fused_ordering(520) 00:13:44.605 fused_ordering(521) 00:13:44.605 fused_ordering(522) 00:13:44.605 fused_ordering(523) 00:13:44.605 fused_ordering(524) 00:13:44.605 fused_ordering(525) 00:13:44.605 fused_ordering(526) 00:13:44.605 fused_ordering(527) 00:13:44.605 fused_ordering(528) 00:13:44.605 fused_ordering(529) 00:13:44.605 fused_ordering(530) 00:13:44.605 fused_ordering(531) 00:13:44.605 fused_ordering(532) 00:13:44.605 fused_ordering(533) 
00:13:44.605 fused_ordering(534) 00:13:44.605 fused_ordering(535) 00:13:44.605 fused_ordering(536) 00:13:44.605 fused_ordering(537) 00:13:44.605 fused_ordering(538) 00:13:44.605 fused_ordering(539) 00:13:44.605 fused_ordering(540) 00:13:44.605 fused_ordering(541) 00:13:44.605 fused_ordering(542) 00:13:44.605 fused_ordering(543) 00:13:44.605 fused_ordering(544) 00:13:44.605 fused_ordering(545) 00:13:44.605 fused_ordering(546) 00:13:44.605 fused_ordering(547) 00:13:44.605 fused_ordering(548) 00:13:44.605 fused_ordering(549) 00:13:44.605 fused_ordering(550) 00:13:44.605 fused_ordering(551) 00:13:44.605 fused_ordering(552) 00:13:44.605 fused_ordering(553) 00:13:44.605 fused_ordering(554) 00:13:44.605 fused_ordering(555) 00:13:44.605 fused_ordering(556) 00:13:44.605 fused_ordering(557) 00:13:44.605 fused_ordering(558) 00:13:44.605 fused_ordering(559) 00:13:44.605 fused_ordering(560) 00:13:44.605 fused_ordering(561) 00:13:44.605 fused_ordering(562) 00:13:44.605 fused_ordering(563) 00:13:44.605 fused_ordering(564) 00:13:44.605 fused_ordering(565) 00:13:44.605 fused_ordering(566) 00:13:44.605 fused_ordering(567) 00:13:44.605 fused_ordering(568) 00:13:44.605 fused_ordering(569) 00:13:44.605 fused_ordering(570) 00:13:44.605 fused_ordering(571) 00:13:44.605 fused_ordering(572) 00:13:44.605 fused_ordering(573) 00:13:44.605 fused_ordering(574) 00:13:44.605 fused_ordering(575) 00:13:44.605 fused_ordering(576) 00:13:44.605 fused_ordering(577) 00:13:44.605 fused_ordering(578) 00:13:44.605 fused_ordering(579) 00:13:44.605 fused_ordering(580) 00:13:44.605 fused_ordering(581) 00:13:44.605 fused_ordering(582) 00:13:44.605 fused_ordering(583) 00:13:44.605 fused_ordering(584) 00:13:44.605 fused_ordering(585) 00:13:44.605 fused_ordering(586) 00:13:44.605 fused_ordering(587) 00:13:44.605 fused_ordering(588) 00:13:44.605 fused_ordering(589) 00:13:44.605 fused_ordering(590) 00:13:44.605 fused_ordering(591) 00:13:44.605 fused_ordering(592) 00:13:44.605 fused_ordering(593) 00:13:44.605 
fused_ordering(594) 00:13:44.605 fused_ordering(595) 00:13:44.605 fused_ordering(596) 00:13:44.605 fused_ordering(597) 00:13:44.605 fused_ordering(598) 00:13:44.605 fused_ordering(599) 00:13:44.605 fused_ordering(600) 00:13:44.605 fused_ordering(601) 00:13:44.605 fused_ordering(602) 00:13:44.605 fused_ordering(603) 00:13:44.605 fused_ordering(604) 00:13:44.605 fused_ordering(605) 00:13:44.605 fused_ordering(606) 00:13:44.605 fused_ordering(607) 00:13:44.605 fused_ordering(608) 00:13:44.605 fused_ordering(609) 00:13:44.605 fused_ordering(610) 00:13:44.605 fused_ordering(611) 00:13:44.605 fused_ordering(612) 00:13:44.605 fused_ordering(613) 00:13:44.605 fused_ordering(614) 00:13:44.605 fused_ordering(615) 00:13:45.170 fused_ordering(616) 00:13:45.170 fused_ordering(617) 00:13:45.170 fused_ordering(618) 00:13:45.170 fused_ordering(619) 00:13:45.170 fused_ordering(620) 00:13:45.170 fused_ordering(621) 00:13:45.170 fused_ordering(622) 00:13:45.170 fused_ordering(623) 00:13:45.170 fused_ordering(624) 00:13:45.170 fused_ordering(625) 00:13:45.170 fused_ordering(626) 00:13:45.170 fused_ordering(627) 00:13:45.170 fused_ordering(628) 00:13:45.170 fused_ordering(629) 00:13:45.170 fused_ordering(630) 00:13:45.170 fused_ordering(631) 00:13:45.170 fused_ordering(632) 00:13:45.170 fused_ordering(633) 00:13:45.170 fused_ordering(634) 00:13:45.170 fused_ordering(635) 00:13:45.170 fused_ordering(636) 00:13:45.170 fused_ordering(637) 00:13:45.170 fused_ordering(638) 00:13:45.170 fused_ordering(639) 00:13:45.170 fused_ordering(640) 00:13:45.170 fused_ordering(641) 00:13:45.170 fused_ordering(642) 00:13:45.170 fused_ordering(643) 00:13:45.170 fused_ordering(644) 00:13:45.170 fused_ordering(645) 00:13:45.171 fused_ordering(646) 00:13:45.171 fused_ordering(647) 00:13:45.171 fused_ordering(648) 00:13:45.171 fused_ordering(649) 00:13:45.171 fused_ordering(650) 00:13:45.171 fused_ordering(651) 00:13:45.171 fused_ordering(652) 00:13:45.171 fused_ordering(653) 00:13:45.171 fused_ordering(654) 
00:13:45.171 fused_ordering(655) 00:13:45.171 fused_ordering(656) 00:13:45.171 fused_ordering(657) 00:13:45.171 fused_ordering(658) 00:13:45.171 fused_ordering(659) 00:13:45.171 fused_ordering(660) 00:13:45.171 fused_ordering(661) 00:13:45.171 fused_ordering(662) 00:13:45.171 fused_ordering(663) 00:13:45.171 fused_ordering(664) 00:13:45.171 fused_ordering(665) 00:13:45.171 fused_ordering(666) 00:13:45.171 fused_ordering(667) 00:13:45.171 fused_ordering(668) 00:13:45.171 fused_ordering(669) 00:13:45.171 fused_ordering(670) 00:13:45.171 fused_ordering(671) 00:13:45.171 fused_ordering(672) 00:13:45.171 fused_ordering(673) 00:13:45.171 fused_ordering(674) 00:13:45.171 fused_ordering(675) 00:13:45.171 fused_ordering(676) 00:13:45.171 fused_ordering(677) 00:13:45.171 fused_ordering(678) 00:13:45.171 fused_ordering(679) 00:13:45.171 fused_ordering(680) 00:13:45.171 fused_ordering(681) 00:13:45.171 fused_ordering(682) 00:13:45.171 fused_ordering(683) 00:13:45.171 fused_ordering(684) 00:13:45.171 fused_ordering(685) 00:13:45.171 fused_ordering(686) 00:13:45.171 fused_ordering(687) 00:13:45.171 fused_ordering(688) 00:13:45.171 fused_ordering(689) 00:13:45.171 fused_ordering(690) 00:13:45.171 fused_ordering(691) 00:13:45.171 fused_ordering(692) 00:13:45.171 fused_ordering(693) 00:13:45.171 fused_ordering(694) 00:13:45.171 fused_ordering(695) 00:13:45.171 fused_ordering(696) 00:13:45.171 fused_ordering(697) 00:13:45.171 fused_ordering(698) 00:13:45.171 fused_ordering(699) 00:13:45.171 fused_ordering(700) 00:13:45.171 fused_ordering(701) 00:13:45.171 fused_ordering(702) 00:13:45.171 fused_ordering(703) 00:13:45.171 fused_ordering(704) 00:13:45.171 fused_ordering(705) 00:13:45.171 fused_ordering(706) 00:13:45.171 fused_ordering(707) 00:13:45.171 fused_ordering(708) 00:13:45.171 fused_ordering(709) 00:13:45.171 fused_ordering(710) 00:13:45.171 fused_ordering(711) 00:13:45.171 fused_ordering(712) 00:13:45.171 fused_ordering(713) 00:13:45.171 fused_ordering(714) 00:13:45.171 
00:13:45.171 fused_ordering(715) … 00:13:46.105 fused_ordering(1023) [309 consecutive fused_ordering counter lines, logged between 00:13:45.171 and 00:13:46.105, elided]
07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.105 rmmod nvme_tcp 00:13:46.105 rmmod nvme_fabrics 00:13:46.105 rmmod nvme_keyring 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2441469 ']' 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2441469 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2441469 ']' 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 2441469 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2441469 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2441469' 00:13:46.105 killing process with pid 2441469 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2441469 00:13:46.105 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2441469 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:13:46.364 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.266 00:13:48.266 real 0m8.538s 00:13:48.266 user 0m6.451s 00:13:48.266 sys 0m3.390s 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.266 ************************************ 00:13:48.266 END TEST nvmf_fused_ordering 00:13:48.266 ************************************ 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.266 ************************************ 00:13:48.266 START TEST nvmf_ns_masking 00:13:48.266 ************************************ 00:13:48.266 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:48.525 * Looking for test storage... 
00:13:48.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.525 
07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.525 [paths/export.sh@3 and @4: two further PATH reassignments with the same duplicated toolchain prefixes, elided] 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [final PATH value, identical to the assignment above, elided] 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1d97bd68-661e-4d55-84bc-4aef7640c3ab 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=86d4ac7b-45ce-4a21-be4f-94cda300a168 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6edf5f53-2536-4b32-a3c4-aac81f88ec42 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.525 07:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.525 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.526 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.526 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.428 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.428 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:50.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.429 07:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.429 07:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:13:50.429 00:13:50.429 --- 10.0.0.2 ping statistics --- 00:13:50.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.429 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:13:50.429 00:13:50.429 --- 10.0.0.1 ping statistics --- 00:13:50.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.429 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2443927 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2443927 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2443927 ']' 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.429 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.688 [2024-07-25 07:19:23.000391] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
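The `waitforlisten 2443927` step above starts `nvmf_tgt` inside the target's network namespace and then polls until the process is up and its JSON-RPC UNIX socket (`/var/tmp/spdk.sock`) exists. A minimal sketch of that polling pattern follows; the function name `start_and_wait`, the retry budget, and the sleep interval are illustrative, not SPDK's actual helper:

```shell
# Hypothetical sketch of the "waitforlisten" pattern in the log: after
# launching the target (here assumed already started, with its PID in $1),
# poll until its RPC UNIX-domain socket appears, failing fast if the
# process dies first.
start_and_wait() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i=0
    while (( i++ < max_retries )); do
        # The target process must still be alive...
        kill -0 "$pid" 2>/dev/null || return 1
        # ...and its UNIX-domain RPC socket must exist before we proceed.
        [[ -S $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1
}
```

In the log the equivalent wait runs under `ip netns exec cvl_0_0_ns_spdk`, since the target was moved into that namespace during the TCP init phase shown earlier.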
00:13:50.688 [2024-07-25 07:19:23.000487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.688 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.688 [2024-07-25 07:19:23.079460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.688 [2024-07-25 07:19:23.203660] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.688 [2024-07-25 07:19:23.203719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.688 [2024-07-25 07:19:23.203736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.688 [2024-07-25 07:19:23.203750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.688 [2024-07-25 07:19:23.203769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
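The rest of the transcript repeatedly exercises a `ns_is_visible` check: `nvme list-ns` greps for the namespace ID, then `nvme id-ns ... -o json | jq -r .nguid` is compared against 32 zeros, because a namespace that is masked from the host identifies with an all-zero NGUID. A small sketch of that comparison, factored so the string logic can run without hardware (the helper names `nguid_of` and `ns_visible` are hypothetical, not from `ns_masking.sh`):

```shell
# nguid_of wraps the nvme-cli call used in the log; it needs nvme-cli, jq,
# and a real controller device, so it is shown for context only.
nguid_of() {
    # $1 = controller device (e.g. /dev/nvme0), $2 = namespace ID
    nvme id-ns "$1" -n "$2" -o json | jq -r .nguid
}

# ns_visible holds the pure comparison: a masked (inactive) namespace
# reports an NGUID of 32 zeros, so any other value means it is visible.
ns_visible() {
    [[ $1 != "00000000000000000000000000000000" ]]
}
```

This matches the transcript's `[[ $nguid != \0\0...\0 ]]` tests: after `nvmf_ns_add_host` the NGUID reads back as a real value (e.g. `fae654355acb...`), and after `nvmf_ns_remove_host` it reads back as all zeros, which the `NOT ns_is_visible` assertions then expect.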
00:13:50.688 [2024-07-25 07:19:23.203806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.947 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:51.204 [2024-07-25 07:19:23.628483] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.204 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:51.204 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:51.204 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:51.508 Malloc1 00:13:51.508 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:51.769 Malloc2 00:13:51.769 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.027 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:52.285 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.543 [2024-07-25 07:19:24.899645] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.543 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:52.543 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6edf5f53-2536-4b32-a3c4-aac81f88ec42 -a 10.0.0.2 -s 4420 -i 4 00:13:52.801 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.801 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:52.801 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.801 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:52.801 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.702 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.961 [ 0]:0x1 00:13:54.961 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.961 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.961 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae654355acb4984a12e3f5357e08c40 00:13:54.961 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae654355acb4984a12e3f5357e08c40 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.961 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.219 [ 0]:0x1 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae654355acb4984a12e3f5357e08c40 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae654355acb4984a12e3f5357e08c40 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.219 [ 1]:0x2 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:55.219 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.477 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.734 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6edf5f53-2536-4b32-a3c4-aac81f88ec42 -a 10.0.0.2 -s 4420 -i 4 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:55.993 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.891 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.891 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.891 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.891 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:57.891 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.891 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:57.891 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:58.148 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:58.148 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:58.148 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:58.148 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.149 [ 0]:0x2 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.149 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.407 [ 0]:0x1 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae654355acb4984a12e3f5357e08c40 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae654355acb4984a12e3f5357e08c40 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:13:58.407 [ 1]:0x2 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.407 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.665 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.923 [ 0]:0x2 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.923 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6edf5f53-2536-4b32-a3c4-aac81f88ec42 -a 10.0.0.2 -s 4420 -i 4 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:59.181 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.705 [ 0]:0x1 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fae654355acb4984a12e3f5357e08c40 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fae654355acb4984a12e3f5357e08c40 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.705 [ 1]:0x2 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.705 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.705 [ 0]:0x2 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.705 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:01.963 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.221 [2024-07-25 07:19:34.520707] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:02.221 request: 00:14:02.221 { 00:14:02.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.221 "nsid": 2, 00:14:02.221 "host": "nqn.2016-06.io.spdk:host1", 00:14:02.221 "method": "nvmf_ns_remove_host", 00:14:02.221 "req_id": 1 00:14:02.221 } 00:14:02.221 Got JSON-RPC error response 00:14:02.221 response: 00:14:02.221 { 00:14:02.221 "code": -32602, 00:14:02.221 "message": "Invalid parameters" 00:14:02.221 } 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.221 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.222 07:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.222 [ 0]:0x2 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cedb7d5a4ba49e09599e99e9f7098c2 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cedb7d5a4ba49e09599e99e9f7098c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2445420 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2445420 /var/tmp/host.sock 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2445420 ']' 00:14:02.222 
07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:02.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.222 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.222 [2024-07-25 07:19:34.733603] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:02.222 [2024-07-25 07:19:34.733703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445420 ] 00:14:02.480 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.480 [2024-07-25 07:19:34.797615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.480 [2024-07-25 07:19:34.915040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.738 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.738 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:02.738 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.996 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.254 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1d97bd68-661e-4d55-84bc-4aef7640c3ab 00:14:03.254 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:03.254 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1D97BD68661E4D5584BC4AEF7640C3AB -i 00:14:03.819 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 86d4ac7b-45ce-4a21-be4f-94cda300a168 00:14:03.819 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:03.819 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 86D4AC7B45CE4A21BE4F94CDA300A168 -i 00:14:03.819 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:04.077 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:04.335 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.335 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.901 nvme0n1 00:14:04.901 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:04.901 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:05.158 nvme1n2 00:14:05.158 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:05.158 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:05.158 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:05.158 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:05.158 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:05.416 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:05.416 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:05.416 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:05.416 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:05.674 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1d97bd68-661e-4d55-84bc-4aef7640c3ab == \1\d\9\7\b\d\6\8\-\6\6\1\e\-\4\d\5\5\-\8\4\b\c\-\4\a\e\f\7\6\4\0\c\3\a\b ]] 00:14:05.674 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:05.674 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:05.674 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 86d4ac7b-45ce-4a21-be4f-94cda300a168 == \8\6\d\4\a\c\7\b\-\4\5\c\e\-\4\a\2\1\-\b\e\4\f\-\9\4\c\d\a\3\0\0\a\1\6\8 ]] 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2445420 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2445420 ']' 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2445420 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2445420 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.932 
07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2445420' 00:14:05.932 killing process with pid 2445420 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2445420 00:14:05.932 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2445420 00:14:06.521 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.521 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.521 rmmod nvme_tcp 00:14:06.782 rmmod nvme_fabrics 00:14:06.782 rmmod nvme_keyring 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 2443927 ']' 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2443927 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2443927 ']' 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2443927 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2443927 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2443927' 00:14:06.783 killing process with pid 2443927 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2443927 00:14:06.783 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2443927 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.041 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.576 00:14:09.576 real 0m20.706s 00:14:09.576 user 0m27.227s 00:14:09.576 sys 0m4.032s 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.576 ************************************ 00:14:09.576 END TEST nvmf_ns_masking 00:14:09.576 ************************************ 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.576 ************************************ 00:14:09.576 START TEST nvmf_nvme_cli 00:14:09.576 ************************************ 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:09.576 * Looking for test storage... 
00:14:09.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.576 07:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.576 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.577 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.577 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.577 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.577 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.577 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.489 
07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.489 07:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:11.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:11.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.489 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:11.490 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:11.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.490 07:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.490 07:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:14:11.490 00:14:11.490 --- 10.0.0.2 ping statistics --- 00:14:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.490 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:14:11.490 00:14:11.490 --- 10.0.0.1 ping statistics --- 00:14:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.490 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2447910 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2447910 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2447910 ']' 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.490 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.490 [2024-07-25 07:19:43.882991] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:11.490 [2024-07-25 07:19:43.883085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.490 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.490 [2024-07-25 07:19:43.962502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.748 [2024-07-25 07:19:44.090327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.748 [2024-07-25 07:19:44.090392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.748 [2024-07-25 07:19:44.090409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.748 [2024-07-25 07:19:44.090422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.748 [2024-07-25 07:19:44.090434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.748 [2024-07-25 07:19:44.090505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.748 [2024-07-25 07:19:44.090588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.748 [2024-07-25 07:19:44.094265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.748 [2024-07-25 07:19:44.094279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.748 [2024-07-25 07:19:44.253854] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.748 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.749 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 Malloc0 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 Malloc1 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 07:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 [2024-07-25 07:19:44.339773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:12.007 00:14:12.007 Discovery Log Number of Records 2, Generation counter 2 00:14:12.007 =====Discovery Log Entry 0====== 00:14:12.007 trtype: tcp 00:14:12.007 adrfam: ipv4 00:14:12.007 subtype: current discovery subsystem 00:14:12.007 treq: not required 00:14:12.007 portid: 0 00:14:12.007 trsvcid: 4420 00:14:12.007 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:12.007 traddr: 10.0.0.2 00:14:12.007 eflags: explicit discovery connections, duplicate discovery information 00:14:12.007 sectype: none 00:14:12.007 =====Discovery Log Entry 1====== 00:14:12.007 trtype: tcp 00:14:12.007 adrfam: ipv4 00:14:12.007 subtype: nvme subsystem 00:14:12.007 treq: not required 00:14:12.007 portid: 0 00:14:12.007 trsvcid: 4420 00:14:12.007 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:12.007 traddr: 10.0.0.2 00:14:12.007 eflags: none 00:14:12.007 sectype: none 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:12.007 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.573 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:12.573 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:12.573 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.573 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:12.573 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:12.573 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.100 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:15.101 /dev/nvme0n1 ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.101 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.101 rmmod nvme_tcp 00:14:15.101 rmmod nvme_fabrics 00:14:15.359 rmmod 
nvme_keyring 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2447910 ']' 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2447910 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2447910 ']' 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2447910 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2447910 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2447910' 00:14:15.359 killing process with pid 2447910 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2447910 00:14:15.359 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2447910 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.617 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.149 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.149 00:14:18.149 real 0m8.528s 00:14:18.149 user 0m16.056s 00:14:18.149 sys 0m2.241s 00:14:18.149 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.149 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.149 ************************************ 00:14:18.149 END TEST nvmf_nvme_cli 00:14:18.149 ************************************ 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.150 
************************************ 00:14:18.150 START TEST nvmf_vfio_user 00:14:18.150 ************************************ 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:18.150 * Looking for test storage... 00:14:18.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.150 07:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:18.150 07:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2448839 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2448839' 00:14:18.150 Process pid: 2448839 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2448839 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2448839 ']' 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:18.150 [2024-07-25 07:19:50.232214] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:18.150 [2024-07-25 07:19:50.232342] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.150 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.150 [2024-07-25 07:19:50.293107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.150 [2024-07-25 07:19:50.407456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.150 [2024-07-25 07:19:50.407523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.150 [2024-07-25 07:19:50.407538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.150 [2024-07-25 07:19:50.407550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.150 [2024-07-25 07:19:50.407560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
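The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is a poll with a retry cap (`max_retries=100` in the trace). A hedged, self-contained sketch of that idea, with a small Python listener standing in for `nvmf_tgt`:

```shell
# Sketch of the waitforlisten idea (assumption: simplified from the harness).
sock="/tmp/fake_spdk_$$.sock"
python3 -c '
import socket, sys, time
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1]); s.listen(1); time.sleep(3)
' "$sock" &
listener=$!

max_retries=100
for i in $(seq 1 $max_retries); do
    [ -S "$sock" ] && break       # socket path exists: target is listening
    sleep 0.05
done
[ -S "$sock" ] || { echo "timed out"; exit 1; }
kill "$listener"
wait "$listener" 2>/dev/null || true
rm -f "$sock"
```

The real helper additionally issues an RPC over the socket to confirm the app is responsive, not just bound.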
00:14:18.150 [2024-07-25 07:19:50.407620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.150 [2024-07-25 07:19:50.407646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.150 [2024-07-25 07:19:50.407703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.150 [2024-07-25 07:19:50.407707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:18.150 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:19.083 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:19.341 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:19.341 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:19.341 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:19.341 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:19.341 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:19.599 Malloc1 00:14:19.599 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:19.857 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:20.114 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:20.372 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:20.372 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:20.372 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:20.629 Malloc2 00:14:20.629 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:20.887 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:21.145 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:21.404 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:21.404 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:21.404 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:21.404 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:21.404 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:21.404 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:21.404 [2024-07-25 07:19:53.833312] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:14:21.404 [2024-07-25 07:19:53.833350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449261 ] 00:14:21.404 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.404 [2024-07-25 07:19:53.867632] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:21.404 [2024-07-25 07:19:53.873691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:21.404 [2024-07-25 07:19:53.873719] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5e21fb5000 00:14:21.404 [2024-07-25 07:19:53.874683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.875680] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.404 [2024-07-25 
07:19:53.876689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.877689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.878692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.879698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.880708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.881708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.404 [2024-07-25 07:19:53.882718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:21.404 [2024-07-25 07:19:53.882737] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5e21faa000 00:14:21.404 [2024-07-25 07:19:53.883854] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:21.404 [2024-07-25 07:19:53.897843] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:21.404 [2024-07-25 07:19:53.897882] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:21.404 [2024-07-25 07:19:53.902846] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:14:21.404 [2024-07-25 07:19:53.902907] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:21.404 [2024-07-25 07:19:53.903024] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:21.404 [2024-07-25 07:19:53.903063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:21.404 [2024-07-25 07:19:53.903075] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:21.404 [2024-07-25 07:19:53.905252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:21.404 [2024-07-25 07:19:53.905280] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:21.404 [2024-07-25 07:19:53.905294] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:21.404 [2024-07-25 07:19:53.905842] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:21.404 [2024-07-25 07:19:53.905860] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:21.404 [2024-07-25 07:19:53.905874] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:21.404 [2024-07-25 07:19:53.906848] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:21.404 [2024-07-25 07:19:53.906868] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:21.404 [2024-07-25 07:19:53.907856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:21.404 [2024-07-25 07:19:53.907875] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:21.404 [2024-07-25 07:19:53.907884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:21.404 [2024-07-25 07:19:53.907896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:21.404 [2024-07-25 07:19:53.908006] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:21.404 [2024-07-25 07:19:53.908015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:21.404 [2024-07-25 07:19:53.908023] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:21.404 [2024-07-25 07:19:53.908880] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:21.404 [2024-07-25 07:19:53.909870] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:21.404 [2024-07-25 07:19:53.910876] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:21.404 
[2024-07-25 07:19:53.911870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.404 [2024-07-25 07:19:53.911985] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:21.404 [2024-07-25 07:19:53.912888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:21.404 [2024-07-25 07:19:53.912907] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:21.404 [2024-07-25 07:19:53.912916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:21.404 [2024-07-25 07:19:53.912945] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:21.404 [2024-07-25 07:19:53.912959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:21.404 [2024-07-25 07:19:53.912988] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.404 [2024-07-25 07:19:53.912999] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.404 [2024-07-25 07:19:53.913006] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.405 [2024-07-25 07:19:53.913028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913115] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:21.405 [2024-07-25 07:19:53.913123] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:21.405 [2024-07-25 07:19:53.913130] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:21.405 [2024-07-25 07:19:53.913138] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:21.405 [2024-07-25 07:19:53.913146] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:21.405 [2024-07-25 07:19:53.913154] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:21.405 [2024-07-25 07:19:53.913162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.405 [2024-07-25 07:19:53.913273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.405 [2024-07-25 07:19:53.913287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.405 [2024-07-25 07:19:53.913299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.405 [2024-07-25 07:19:53.913308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913371] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:21.405 [2024-07-25 07:19:53.913380] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913396] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913422] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913536] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:21.405 [2024-07-25 07:19:53.913545] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:21.405 [2024-07-25 07:19:53.913551] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.405 [2024-07-25 07:19:53.913561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913612] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:21.405 [2024-07-25 07:19:53.913628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:21.405 [2024-07-25 
07:19:53.913656] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.405 [2024-07-25 07:19:53.913664] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.405 [2024-07-25 07:19:53.913670] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.405 [2024-07-25 07:19:53.913679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913759] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.405 [2024-07-25 07:19:53.913767] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.405 [2024-07-25 07:19:53.913773] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.405 [2024-07-25 07:19:53.913782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913812] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913872] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913881] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:21.405 [2024-07-25 07:19:53.913889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:21.405 [2024-07-25 07:19:53.913898] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:21.405 [2024-07-25 07:19:53.913928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:14:21.405 [2024-07-25 07:19:53.913966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.913978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.913994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.914008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.914024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.914036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:21.405 [2024-07-25 07:19:53.914059] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:21.405 [2024-07-25 07:19:53.914069] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:21.405 [2024-07-25 07:19:53.914076] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:21.405 [2024-07-25 07:19:53.914082] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:21.405 [2024-07-25 07:19:53.914088] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:21.405 [2024-07-25 07:19:53.914097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:21.405 [2024-07-25 07:19:53.914109] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:14:21.405 [2024-07-25 07:19:53.914117] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:21.405 [2024-07-25 07:19:53.914127] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.405 [2024-07-25 07:19:53.914136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.914148] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:21.405 [2024-07-25 07:19:53.914156] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.405 [2024-07-25 07:19:53.914162] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.405 [2024-07-25 07:19:53.914171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.405 [2024-07-25 07:19:53.914183] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:21.405 [2024-07-25 07:19:53.914192] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:21.405 [2024-07-25 07:19:53.914197] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.406 [2024-07-25 07:19:53.914206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:21.406 [2024-07-25 07:19:53.914217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:21.406 [2024-07-25 07:19:53.914259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:21.406 [2024-07-25 07:19:53.914282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:21.406 [2024-07-25 07:19:53.914295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:21.406 ===================================================== 00:14:21.406 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.406 ===================================================== 00:14:21.406 Controller Capabilities/Features 00:14:21.406 ================================ 00:14:21.406 Vendor ID: 4e58 00:14:21.406 Subsystem Vendor ID: 4e58 00:14:21.406 Serial Number: SPDK1 00:14:21.406 Model Number: SPDK bdev Controller 00:14:21.406 Firmware Version: 24.09 00:14:21.406 Recommended Arb Burst: 6 00:14:21.406 IEEE OUI Identifier: 8d 6b 50 00:14:21.406 Multi-path I/O 00:14:21.406 May have multiple subsystem ports: Yes 00:14:21.406 May have multiple controllers: Yes 00:14:21.406 Associated with SR-IOV VF: No 00:14:21.406 Max Data Transfer Size: 131072 00:14:21.406 Max Number of Namespaces: 32 00:14:21.406 Max Number of I/O Queues: 127 00:14:21.406 NVMe Specification Version (VS): 1.3 00:14:21.406 NVMe Specification Version (Identify): 1.3 00:14:21.406 Maximum Queue Entries: 256 00:14:21.406 Contiguous Queues Required: Yes 00:14:21.406 Arbitration Mechanisms Supported 00:14:21.406 Weighted Round Robin: Not Supported 00:14:21.406 Vendor Specific: Not Supported 00:14:21.406 Reset Timeout: 15000 ms 00:14:21.406 Doorbell Stride: 4 bytes 00:14:21.406 NVM Subsystem Reset: Not Supported 00:14:21.406 Command Sets Supported 00:14:21.406 NVM Command Set: Supported 00:14:21.406 Boot Partition: Not Supported 00:14:21.406 Memory Page Size Minimum: 4096 bytes 00:14:21.406 Memory Page Size Maximum: 4096 bytes 00:14:21.406 Persistent Memory Region: Not 
Supported 00:14:21.406 Optional Asynchronous Events Supported 00:14:21.406 Namespace Attribute Notices: Supported 00:14:21.406 Firmware Activation Notices: Not Supported 00:14:21.406 ANA Change Notices: Not Supported 00:14:21.406 PLE Aggregate Log Change Notices: Not Supported 00:14:21.406 LBA Status Info Alert Notices: Not Supported 00:14:21.406 EGE Aggregate Log Change Notices: Not Supported 00:14:21.406 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.406 Zone Descriptor Change Notices: Not Supported 00:14:21.406 Discovery Log Change Notices: Not Supported 00:14:21.406 Controller Attributes 00:14:21.406 128-bit Host Identifier: Supported 00:14:21.406 Non-Operational Permissive Mode: Not Supported 00:14:21.406 NVM Sets: Not Supported 00:14:21.406 Read Recovery Levels: Not Supported 00:14:21.406 Endurance Groups: Not Supported 00:14:21.406 Predictable Latency Mode: Not Supported 00:14:21.406 Traffic Based Keep ALive: Not Supported 00:14:21.406 Namespace Granularity: Not Supported 00:14:21.406 SQ Associations: Not Supported 00:14:21.406 UUID List: Not Supported 00:14:21.406 Multi-Domain Subsystem: Not Supported 00:14:21.406 Fixed Capacity Management: Not Supported 00:14:21.406 Variable Capacity Management: Not Supported 00:14:21.406 Delete Endurance Group: Not Supported 00:14:21.406 Delete NVM Set: Not Supported 00:14:21.406 Extended LBA Formats Supported: Not Supported 00:14:21.406 Flexible Data Placement Supported: Not Supported 00:14:21.406 00:14:21.406 Controller Memory Buffer Support 00:14:21.406 ================================ 00:14:21.406 Supported: No 00:14:21.406 00:14:21.406 Persistent Memory Region Support 00:14:21.406 ================================ 00:14:21.406 Supported: No 00:14:21.406 00:14:21.406 Admin Command Set Attributes 00:14:21.406 ============================ 00:14:21.406 Security Send/Receive: Not Supported 00:14:21.406 Format NVM: Not Supported 00:14:21.406 Firmware Activate/Download: Not Supported 00:14:21.406 Namespace 
Management: Not Supported 00:14:21.406 Device Self-Test: Not Supported 00:14:21.406 Directives: Not Supported 00:14:21.406 NVMe-MI: Not Supported 00:14:21.406 Virtualization Management: Not Supported 00:14:21.406 Doorbell Buffer Config: Not Supported 00:14:21.406 Get LBA Status Capability: Not Supported 00:14:21.406 Command & Feature Lockdown Capability: Not Supported 00:14:21.406 Abort Command Limit: 4 00:14:21.406 Async Event Request Limit: 4 00:14:21.406 Number of Firmware Slots: N/A 00:14:21.406 Firmware Slot 1 Read-Only: N/A 00:14:21.406 Firmware Activation Without Reset: N/A 00:14:21.406 Multiple Update Detection Support: N/A 00:14:21.406 Firmware Update Granularity: No Information Provided 00:14:21.406 Per-Namespace SMART Log: No 00:14:21.406 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.406 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:21.406 Command Effects Log Page: Supported 00:14:21.406 Get Log Page Extended Data: Supported 00:14:21.406 Telemetry Log Pages: Not Supported 00:14:21.406 Persistent Event Log Pages: Not Supported 00:14:21.406 Supported Log Pages Log Page: May Support 00:14:21.406 Commands Supported & Effects Log Page: Not Supported 00:14:21.406 Feature Identifiers & Effects Log Page:May Support 00:14:21.406 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.406 Data Area 4 for Telemetry Log: Not Supported 00:14:21.406 Error Log Page Entries Supported: 128 00:14:21.406 Keep Alive: Supported 00:14:21.406 Keep Alive Granularity: 10000 ms 00:14:21.406 00:14:21.406 NVM Command Set Attributes 00:14:21.406 ========================== 00:14:21.406 Submission Queue Entry Size 00:14:21.406 Max: 64 00:14:21.406 Min: 64 00:14:21.406 Completion Queue Entry Size 00:14:21.406 Max: 16 00:14:21.406 Min: 16 00:14:21.406 Number of Namespaces: 32 00:14:21.406 Compare Command: Supported 00:14:21.406 Write Uncorrectable Command: Not Supported 00:14:21.406 Dataset Management Command: Supported 00:14:21.406 Write Zeroes Command: Supported 
00:14:21.406 Set Features Save Field: Not Supported 00:14:21.406 Reservations: Not Supported 00:14:21.406 Timestamp: Not Supported 00:14:21.406 Copy: Supported 00:14:21.406 Volatile Write Cache: Present 00:14:21.406 Atomic Write Unit (Normal): 1 00:14:21.406 Atomic Write Unit (PFail): 1 00:14:21.406 Atomic Compare & Write Unit: 1 00:14:21.406 Fused Compare & Write: Supported 00:14:21.406 Scatter-Gather List 00:14:21.406 SGL Command Set: Supported (Dword aligned) 00:14:21.406 SGL Keyed: Not Supported 00:14:21.406 SGL Bit Bucket Descriptor: Not Supported 00:14:21.406 SGL Metadata Pointer: Not Supported 00:14:21.406 Oversized SGL: Not Supported 00:14:21.406 SGL Metadata Address: Not Supported 00:14:21.406 SGL Offset: Not Supported 00:14:21.406 Transport SGL Data Block: Not Supported 00:14:21.406 Replay Protected Memory Block: Not Supported 00:14:21.406 00:14:21.406 Firmware Slot Information 00:14:21.406 ========================= 00:14:21.406 Active slot: 1 00:14:21.406 Slot 1 Firmware Revision: 24.09 00:14:21.406 00:14:21.406 00:14:21.406 Commands Supported and Effects 00:14:21.406 ============================== 00:14:21.406 Admin Commands 00:14:21.406 -------------- 00:14:21.406 Get Log Page (02h): Supported 00:14:21.406 Identify (06h): Supported 00:14:21.406 Abort (08h): Supported 00:14:21.406 Set Features (09h): Supported 00:14:21.406 Get Features (0Ah): Supported 00:14:21.406 Asynchronous Event Request (0Ch): Supported 00:14:21.406 Keep Alive (18h): Supported 00:14:21.406 I/O Commands 00:14:21.406 ------------ 00:14:21.406 Flush (00h): Supported LBA-Change 00:14:21.406 Write (01h): Supported LBA-Change 00:14:21.406 Read (02h): Supported 00:14:21.406 Compare (05h): Supported 00:14:21.406 Write Zeroes (08h): Supported LBA-Change 00:14:21.406 Dataset Management (09h): Supported LBA-Change 00:14:21.406 Copy (19h): Supported LBA-Change 00:14:21.406 00:14:21.406 Error Log 00:14:21.406 ========= 00:14:21.406 00:14:21.406 Arbitration 00:14:21.406 =========== 00:14:21.406 
Arbitration Burst: 1 00:14:21.406 00:14:21.406 Power Management 00:14:21.406 ================ 00:14:21.406 Number of Power States: 1 00:14:21.406 Current Power State: Power State #0 00:14:21.406 Power State #0: 00:14:21.406 Max Power: 0.00 W 00:14:21.406 Non-Operational State: Operational 00:14:21.406 Entry Latency: Not Reported 00:14:21.407 Exit Latency: Not Reported 00:14:21.407 Relative Read Throughput: 0 00:14:21.407 Relative Read Latency: 0 00:14:21.407 Relative Write Throughput: 0 00:14:21.407 Relative Write Latency: 0 00:14:21.407 Idle Power: Not Reported 00:14:21.407 Active Power: Not Reported 00:14:21.407 Non-Operational Permissive Mode: Not Supported 00:14:21.407 00:14:21.407 Health Information 00:14:21.407 ================== 00:14:21.407 Critical Warnings: 00:14:21.407 Available Spare Space: OK 00:14:21.407 Temperature: OK 00:14:21.407 Device Reliability: OK 00:14:21.407 Read Only: No 00:14:21.407 Volatile Memory Backup: OK 00:14:21.407 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:21.407 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:21.407 Available Spare: 0% 00:14:21.407 Available Sp[2024-07-25 07:19:53.914426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:21.407 [2024-07-25 07:19:53.914444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:21.407 [2024-07-25 07:19:53.914492] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:21.407 [2024-07-25 07:19:53.914512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.407 [2024-07-25 07:19:53.914524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.407 [2024-07-25 07:19:53.914549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.407 [2024-07-25 07:19:53.914559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.407 [2024-07-25 07:19:53.917253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:21.407 [2024-07-25 07:19:53.917276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:21.407 [2024-07-25 07:19:53.917914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.407 [2024-07-25 07:19:53.917986] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:21.407 [2024-07-25 07:19:53.918000] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:21.407 [2024-07-25 07:19:53.918926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:21.407 [2024-07-25 07:19:53.918954] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:21.407 [2024-07-25 07:19:53.919014] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:21.407 [2024-07-25 07:19:53.920965] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:21.665 are Threshold: 0% 00:14:21.665 Life Percentage Used: 0% 00:14:21.665 Data Units Read: 0 00:14:21.665 Data Units Written: 0 00:14:21.665 Host Read Commands: 0 00:14:21.665 Host Write Commands: 
0 00:14:21.665 Controller Busy Time: 0 minutes 00:14:21.665 Power Cycles: 0 00:14:21.665 Power On Hours: 0 hours 00:14:21.665 Unsafe Shutdowns: 0 00:14:21.665 Unrecoverable Media Errors: 0 00:14:21.665 Lifetime Error Log Entries: 0 00:14:21.665 Warning Temperature Time: 0 minutes 00:14:21.665 Critical Temperature Time: 0 minutes 00:14:21.665 00:14:21.665 Number of Queues 00:14:21.665 ================ 00:14:21.665 Number of I/O Submission Queues: 127 00:14:21.665 Number of I/O Completion Queues: 127 00:14:21.665 00:14:21.665 Active Namespaces 00:14:21.665 ================= 00:14:21.665 Namespace ID:1 00:14:21.665 Error Recovery Timeout: Unlimited 00:14:21.665 Command Set Identifier: NVM (00h) 00:14:21.666 Deallocate: Supported 00:14:21.666 Deallocated/Unwritten Error: Not Supported 00:14:21.666 Deallocated Read Value: Unknown 00:14:21.666 Deallocate in Write Zeroes: Not Supported 00:14:21.666 Deallocated Guard Field: 0xFFFF 00:14:21.666 Flush: Supported 00:14:21.666 Reservation: Supported 00:14:21.666 Namespace Sharing Capabilities: Multiple Controllers 00:14:21.666 Size (in LBAs): 131072 (0GiB) 00:14:21.666 Capacity (in LBAs): 131072 (0GiB) 00:14:21.666 Utilization (in LBAs): 131072 (0GiB) 00:14:21.666 NGUID: 378EF4ECA9644B49BE465301131EAD1A 00:14:21.666 UUID: 378ef4ec-a964-4b49-be46-5301131ead1a 00:14:21.666 Thin Provisioning: Not Supported 00:14:21.666 Per-NS Atomic Units: Yes 00:14:21.666 Atomic Boundary Size (Normal): 0 00:14:21.666 Atomic Boundary Size (PFail): 0 00:14:21.666 Atomic Boundary Offset: 0 00:14:21.666 Maximum Single Source Range Length: 65535 00:14:21.666 Maximum Copy Length: 65535 00:14:21.666 Maximum Source Range Count: 1 00:14:21.666 NGUID/EUI64 Never Reused: No 00:14:21.666 Namespace Write Protected: No 00:14:21.666 Number of LBA Formats: 1 00:14:21.666 Current LBA Format: LBA Format #00 00:14:21.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.666 00:14:21.666 07:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:21.666 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.666 [2024-07-25 07:19:54.151107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.956 Initializing NVMe Controllers 00:14:26.956 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:26.956 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:26.956 Initialization complete. Launching workers. 00:14:26.956 ======================================================== 00:14:26.956 Latency(us) 00:14:26.956 Device Information : IOPS MiB/s Average min max 00:14:26.956 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34007.79 132.84 3764.66 1173.53 7630.55 00:14:26.956 ======================================================== 00:14:26.956 Total : 34007.79 132.84 3764.66 1173.53 7630.55 00:14:26.956 00:14:26.956 [2024-07-25 07:19:59.173879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.956 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:26.956 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.956 [2024-07-25 07:19:59.414052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.215 Initializing NVMe Controllers 00:14:32.215 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:32.215 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:32.215 Initialization complete. Launching workers. 00:14:32.215 ======================================================== 00:14:32.215 Latency(us) 00:14:32.215 Device Information : IOPS MiB/s Average min max 00:14:32.215 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.38 62.40 8021.23 4959.44 15990.19 00:14:32.215 ======================================================== 00:14:32.215 Total : 15974.38 62.40 8021.23 4959.44 15990.19 00:14:32.215 00:14:32.215 [2024-07-25 07:20:04.446394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.215 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:32.215 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.215 [2024-07-25 07:20:04.670515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.476 [2024-07-25 07:20:09.768804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.476 Initializing NVMe Controllers 00:14:37.476 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.476 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.476 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:37.476 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:37.476 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:37.476 Initialization complete. Launching workers. 00:14:37.476 Starting thread on core 2 00:14:37.476 Starting thread on core 3 00:14:37.476 Starting thread on core 1 00:14:37.476 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:37.476 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.733 [2024-07-25 07:20:10.073837] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.024 [2024-07-25 07:20:13.334879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.024 Initializing NVMe Controllers 00:14:41.024 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.024 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.024 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:41.024 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:41.024 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:41.024 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:41.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:41.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:41.024 Initialization complete. Launching workers. 
00:14:41.024 Starting thread on core 1 with urgent priority queue 00:14:41.024 Starting thread on core 2 with urgent priority queue 00:14:41.024 Starting thread on core 0 with urgent priority queue 00:14:41.024 Starting thread on core 3 with urgent priority queue 00:14:41.024 SPDK bdev Controller (SPDK1 ) core 0: 4798.33 IO/s 20.84 secs/100000 ios 00:14:41.024 SPDK bdev Controller (SPDK1 ) core 1: 4808.33 IO/s 20.80 secs/100000 ios 00:14:41.024 SPDK bdev Controller (SPDK1 ) core 2: 4499.67 IO/s 22.22 secs/100000 ios 00:14:41.024 SPDK bdev Controller (SPDK1 ) core 3: 4096.33 IO/s 24.41 secs/100000 ios 00:14:41.024 ======================================================== 00:14:41.024 00:14:41.024 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:41.024 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.287 [2024-07-25 07:20:13.638965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.287 Initializing NVMe Controllers 00:14:41.287 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.287 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.287 Namespace ID: 1 size: 0GB 00:14:41.287 Initialization complete. 00:14:41.287 INFO: using host memory buffer for IO 00:14:41.287 Hello world! 
00:14:41.287 [2024-07-25 07:20:13.672657] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.287 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:41.287 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.544 [2024-07-25 07:20:13.973771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.478 Initializing NVMe Controllers 00:14:42.478 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:42.478 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:42.478 Initialization complete. Launching workers. 00:14:42.478 submit (in ns) avg, min, max = 7375.5, 3493.3, 4013975.6 00:14:42.478 complete (in ns) avg, min, max = 24145.1, 2070.0, 4996193.3 00:14:42.478 00:14:42.478 Submit histogram 00:14:42.478 ================ 00:14:42.478 Range in us Cumulative Count 00:14:42.478 3.484 - 3.508: 0.0077% ( 1) 00:14:42.478 3.508 - 3.532: 0.2711% ( 34) 00:14:42.478 3.532 - 3.556: 1.5958% ( 171) 00:14:42.478 3.556 - 3.579: 4.4620% ( 370) 00:14:42.478 3.579 - 3.603: 10.3106% ( 755) 00:14:42.478 3.603 - 3.627: 19.7072% ( 1213) 00:14:42.478 3.627 - 3.650: 29.4911% ( 1263) 00:14:42.478 3.650 - 3.674: 38.1362% ( 1116) 00:14:42.478 3.674 - 3.698: 45.1158% ( 901) 00:14:42.478 3.698 - 3.721: 51.2898% ( 797) 00:14:42.478 3.721 - 3.745: 55.7983% ( 582) 00:14:42.478 3.745 - 3.769: 59.5399% ( 483) 00:14:42.478 3.769 - 3.793: 63.0103% ( 448) 00:14:42.478 3.793 - 3.816: 66.0857% ( 397) 00:14:42.478 3.816 - 3.840: 69.3392% ( 420) 00:14:42.478 3.840 - 3.864: 73.3209% ( 514) 00:14:42.478 3.864 - 3.887: 77.8527% ( 585) 00:14:42.478 3.887 - 3.911: 81.6872% ( 495) 00:14:42.478 3.911 - 3.935: 84.6464% ( 382) 00:14:42.478 3.935 - 
3.959: 86.6915% ( 264) 00:14:42.478 3.959 - 3.982: 88.3879% ( 219) 00:14:42.478 3.982 - 4.006: 89.9682% ( 204) 00:14:42.478 4.006 - 4.030: 91.2387% ( 164) 00:14:42.478 4.030 - 4.053: 92.2070% ( 125) 00:14:42.478 4.053 - 4.077: 93.1056% ( 116) 00:14:42.478 4.077 - 4.101: 94.0119% ( 117) 00:14:42.478 4.101 - 4.124: 94.7943% ( 101) 00:14:42.478 4.124 - 4.148: 95.3056% ( 66) 00:14:42.478 4.148 - 4.172: 95.6542% ( 45) 00:14:42.478 4.172 - 4.196: 95.9253% ( 35) 00:14:42.478 4.196 - 4.219: 96.1577% ( 30) 00:14:42.478 4.219 - 4.243: 96.3901% ( 30) 00:14:42.478 4.243 - 4.267: 96.4443% ( 7) 00:14:42.478 4.267 - 4.290: 96.6070% ( 21) 00:14:42.478 4.290 - 4.314: 96.7232% ( 15) 00:14:42.478 4.314 - 4.338: 96.8084% ( 11) 00:14:42.478 4.338 - 4.361: 96.9246% ( 15) 00:14:42.478 4.361 - 4.385: 96.9789% ( 7) 00:14:42.478 4.385 - 4.409: 97.0021% ( 3) 00:14:42.478 4.409 - 4.433: 97.0641% ( 8) 00:14:42.478 4.433 - 4.456: 97.0950% ( 4) 00:14:42.478 4.456 - 4.480: 97.1028% ( 1) 00:14:42.478 4.480 - 4.504: 97.1260% ( 3) 00:14:42.478 4.504 - 4.527: 97.1415% ( 2) 00:14:42.478 4.599 - 4.622: 97.1493% ( 1) 00:14:42.478 4.646 - 4.670: 97.1648% ( 2) 00:14:42.478 4.670 - 4.693: 97.1725% ( 1) 00:14:42.478 4.693 - 4.717: 97.1803% ( 1) 00:14:42.478 4.717 - 4.741: 97.2190% ( 5) 00:14:42.478 4.741 - 4.764: 97.2577% ( 5) 00:14:42.478 4.764 - 4.788: 97.2810% ( 3) 00:14:42.478 4.788 - 4.812: 97.3352% ( 7) 00:14:42.478 4.812 - 4.836: 97.3584% ( 3) 00:14:42.478 4.836 - 4.859: 97.4127% ( 7) 00:14:42.478 4.859 - 4.883: 97.4436% ( 4) 00:14:42.478 4.883 - 4.907: 97.4979% ( 7) 00:14:42.478 4.907 - 4.930: 97.5366% ( 5) 00:14:42.478 4.930 - 4.954: 97.5831% ( 6) 00:14:42.478 4.954 - 4.978: 97.6296% ( 6) 00:14:42.478 4.978 - 5.001: 97.6760% ( 6) 00:14:42.478 5.001 - 5.025: 97.7380% ( 8) 00:14:42.478 5.025 - 5.049: 97.7613% ( 3) 00:14:42.478 5.049 - 5.073: 97.8000% ( 5) 00:14:42.478 5.073 - 5.096: 97.8155% ( 2) 00:14:42.478 5.096 - 5.120: 97.8387% ( 3) 00:14:42.478 5.120 - 5.144: 97.8774% ( 5) 00:14:42.478 5.144 - 
5.167: 97.9007% ( 3) 00:14:42.478 5.167 - 5.191: 97.9239% ( 3) 00:14:42.478 5.191 - 5.215: 97.9549% ( 4) 00:14:42.478 5.215 - 5.239: 97.9627% ( 1) 00:14:42.478 5.239 - 5.262: 97.9782% ( 2) 00:14:42.478 5.262 - 5.286: 97.9859% ( 1) 00:14:42.478 5.310 - 5.333: 97.9936% ( 1) 00:14:42.478 5.428 - 5.452: 98.0014% ( 1) 00:14:42.478 5.499 - 5.523: 98.0091% ( 1) 00:14:42.478 5.547 - 5.570: 98.0169% ( 1) 00:14:42.478 5.570 - 5.594: 98.0246% ( 1) 00:14:42.478 5.594 - 5.618: 98.0401% ( 2) 00:14:42.478 5.618 - 5.641: 98.0556% ( 2) 00:14:42.478 5.689 - 5.713: 98.0634% ( 1) 00:14:42.478 5.713 - 5.736: 98.0711% ( 1) 00:14:42.478 5.736 - 5.760: 98.0789% ( 1) 00:14:42.478 5.831 - 5.855: 98.0866% ( 1) 00:14:42.478 5.879 - 5.902: 98.0944% ( 1) 00:14:42.478 5.997 - 6.021: 98.1021% ( 1) 00:14:42.478 6.068 - 6.116: 98.1098% ( 1) 00:14:42.478 6.210 - 6.258: 98.1176% ( 1) 00:14:42.478 6.353 - 6.400: 98.1331% ( 2) 00:14:42.478 6.495 - 6.542: 98.1408% ( 1) 00:14:42.478 6.542 - 6.590: 98.1563% ( 2) 00:14:42.478 6.637 - 6.684: 98.1641% ( 1) 00:14:42.478 6.684 - 6.732: 98.1718% ( 1) 00:14:42.478 6.779 - 6.827: 98.1873% ( 2) 00:14:42.478 6.827 - 6.874: 98.2028% ( 2) 00:14:42.478 6.921 - 6.969: 98.2106% ( 1) 00:14:42.478 6.969 - 7.016: 98.2260% ( 2) 00:14:42.478 7.064 - 7.111: 98.2338% ( 1) 00:14:42.478 7.111 - 7.159: 98.2570% ( 3) 00:14:42.478 7.159 - 7.206: 98.2648% ( 1) 00:14:42.478 7.206 - 7.253: 98.2725% ( 1) 00:14:42.478 7.253 - 7.301: 98.2880% ( 2) 00:14:42.478 7.301 - 7.348: 98.2958% ( 1) 00:14:42.478 7.348 - 7.396: 98.3035% ( 1) 00:14:42.478 7.396 - 7.443: 98.3267% ( 3) 00:14:42.478 7.443 - 7.490: 98.3345% ( 1) 00:14:42.478 7.585 - 7.633: 98.3422% ( 1) 00:14:42.478 7.727 - 7.775: 98.3500% ( 1) 00:14:42.478 7.775 - 7.822: 98.3655% ( 2) 00:14:42.479 7.822 - 7.870: 98.3732% ( 1) 00:14:42.479 8.059 - 8.107: 98.3965% ( 3) 00:14:42.479 8.107 - 8.154: 98.4120% ( 2) 00:14:42.479 8.201 - 8.249: 98.4275% ( 2) 00:14:42.479 8.296 - 8.344: 98.4507% ( 3) 00:14:42.479 8.344 - 8.391: 98.4662% ( 2) 
00:14:42.479 8.391 - 8.439: 98.4817% ( 2) 00:14:42.479 8.439 - 8.486: 98.4894% ( 1) 00:14:42.479 8.533 - 8.581: 98.5049% ( 2) 00:14:42.479 8.581 - 8.628: 98.5127% ( 1) 00:14:42.479 8.818 - 8.865: 98.5437% ( 4) 00:14:42.479 8.913 - 8.960: 98.5514% ( 1) 00:14:42.479 9.007 - 9.055: 98.5591% ( 1) 00:14:42.479 9.102 - 9.150: 98.5669% ( 1) 00:14:42.479 9.481 - 9.529: 98.5746% ( 1) 00:14:42.479 10.003 - 10.050: 98.5901% ( 2) 00:14:42.479 10.050 - 10.098: 98.5979% ( 1) 00:14:42.479 10.335 - 10.382: 98.6056% ( 1) 00:14:42.479 10.524 - 10.572: 98.6134% ( 1) 00:14:42.479 10.667 - 10.714: 98.6211% ( 1) 00:14:42.479 10.761 - 10.809: 98.6289% ( 1) 00:14:42.479 10.999 - 11.046: 98.6366% ( 1) 00:14:42.479 11.141 - 11.188: 98.6444% ( 1) 00:14:42.479 11.236 - 11.283: 98.6521% ( 1) 00:14:42.479 11.283 - 11.330: 98.6598% ( 1) 00:14:42.479 11.330 - 11.378: 98.6676% ( 1) 00:14:42.479 11.710 - 11.757: 98.6753% ( 1) 00:14:42.479 11.899 - 11.947: 98.6831% ( 1) 00:14:42.479 12.326 - 12.421: 98.6908% ( 1) 00:14:42.479 12.421 - 12.516: 98.6986% ( 1) 00:14:42.479 12.516 - 12.610: 98.7063% ( 1) 00:14:42.479 12.610 - 12.705: 98.7141% ( 1) 00:14:42.479 12.705 - 12.800: 98.7218% ( 1) 00:14:42.479 12.895 - 12.990: 98.7296% ( 1) 00:14:42.479 13.274 - 13.369: 98.7373% ( 1) 00:14:42.479 13.369 - 13.464: 98.7451% ( 1) 00:14:42.479 13.464 - 13.559: 98.7606% ( 2) 00:14:42.479 13.559 - 13.653: 98.7683% ( 1) 00:14:42.479 13.843 - 13.938: 98.7760% ( 1) 00:14:42.479 14.127 - 14.222: 98.7915% ( 2) 00:14:42.479 14.412 - 14.507: 98.8070% ( 2) 00:14:42.479 14.601 - 14.696: 98.8225% ( 2) 00:14:42.479 14.696 - 14.791: 98.8303% ( 1) 00:14:42.479 15.455 - 15.550: 98.8380% ( 1) 00:14:42.479 17.067 - 17.161: 98.8613% ( 3) 00:14:42.479 17.161 - 17.256: 98.9000% ( 5) 00:14:42.479 17.256 - 17.351: 98.9232% ( 3) 00:14:42.479 17.351 - 17.446: 98.9387% ( 2) 00:14:42.479 17.446 - 17.541: 98.9852% ( 6) 00:14:42.479 17.541 - 17.636: 99.0317% ( 6) 00:14:42.479 17.636 - 17.730: 99.0782% ( 6) 00:14:42.479 17.730 - 17.825: 
99.1246% ( 6) 00:14:42.479 17.825 - 17.920: 99.1711% ( 6) 00:14:42.479 17.920 - 18.015: 99.2408% ( 9) 00:14:42.479 18.015 - 18.110: 99.3183% ( 10) 00:14:42.479 18.110 - 18.204: 99.3725% ( 7) 00:14:42.479 18.204 - 18.299: 99.4345% ( 8) 00:14:42.479 18.299 - 18.394: 99.4732% ( 5) 00:14:42.479 18.394 - 18.489: 99.5662% ( 12) 00:14:42.479 18.489 - 18.584: 99.6359% ( 9) 00:14:42.479 18.584 - 18.679: 99.6746% ( 5) 00:14:42.479 18.679 - 18.773: 99.6979% ( 3) 00:14:42.479 18.773 - 18.868: 99.7366% ( 5) 00:14:42.479 18.868 - 18.963: 99.7599% ( 3) 00:14:42.479 18.963 - 19.058: 99.7754% ( 2) 00:14:42.479 19.058 - 19.153: 99.8063% ( 4) 00:14:42.479 19.153 - 19.247: 99.8218% ( 2) 00:14:42.479 19.247 - 19.342: 99.8296% ( 1) 00:14:42.479 19.532 - 19.627: 99.8451% ( 2) 00:14:42.479 19.627 - 19.721: 99.8528% ( 1) 00:14:42.479 19.816 - 19.911: 99.8606% ( 1) 00:14:42.479 19.911 - 20.006: 99.8683% ( 1) 00:14:42.479 20.006 - 20.101: 99.8761% ( 1) 00:14:42.479 20.859 - 20.954: 99.8838% ( 1) 00:14:42.479 22.850 - 22.945: 99.8915% ( 1) 00:14:42.479 23.135 - 23.230: 99.8993% ( 1) 00:14:42.479 23.230 - 23.324: 99.9070% ( 1) 00:14:42.479 24.652 - 24.841: 99.9148% ( 1) 00:14:42.479 3980.705 - 4004.978: 99.9923% ( 10) 00:14:42.479 4004.978 - 4029.250: 100.0000% ( 1) 00:14:42.479 00:14:42.479 Complete histogram 00:14:42.479 ================== 00:14:42.479 Range in us Cumulative Count 00:14:42.479 2.062 - 2.074: 0.0542% ( 7) 00:14:42.479 2.074 - 2.086: 12.8205% ( 1648) 00:14:42.479 2.086 - 2.098: 31.5749% ( 2421) 00:14:42.479 2.098 - 2.110: 33.8833% ( 298) 00:14:42.479 2.110 - 2.121: 51.2278% ( 2239) 00:14:42.479 2.121 - 2.133: 60.1286% ( 1149) 00:14:42.479 2.133 - 2.145: 62.3828% ( 291) 00:14:42.479 2.145 - 2.157: 68.8202% ( 831) 00:14:42.479 2.157 - 2.169: 72.5773% ( 485) 00:14:42.479 2.169 - 2.181: 73.6928% ( 144) 00:14:42.479 2.181 - 2.193: 79.4717% ( 746) 00:14:42.479 2.193 - 2.204: 82.2527% ( 359) 00:14:42.479 2.204 - 2.216: 83.0196% ( 99) 00:14:42.479 2.216 - 2.228: 85.8006% ( 359) 
00:14:42.479 2.228 - 2.240: 88.9070% ( 401) 00:14:42.479 2.240 - 2.252: 90.4640% ( 201) 00:14:42.479 2.252 - 2.264: 92.5788% ( 273) 00:14:42.479 2.264 - 2.276: 93.6323% ( 136) 00:14:42.479 2.276 - 2.287: 93.9345% ( 39) 00:14:42.479 2.287 - 2.299: 94.2056% ( 35) 00:14:42.479 2.299 - 2.311: 94.8021% ( 77) 00:14:42.479 2.311 - 2.323: 95.3288% ( 68) 00:14:42.479 2.323 - 2.335: 95.4295% ( 13) 00:14:42.479 2.335 - 2.347: 95.4915% ( 8) 00:14:42.479 2.347 - 2.359: 95.6774% ( 24) 00:14:42.479 2.359 - 2.370: 95.9873% ( 40) 00:14:42.479 2.370 - 2.382: 96.3049% ( 41) 00:14:42.479 2.382 - 2.394: 96.8317% ( 68) 00:14:42.479 2.394 - 2.406: 97.2422% ( 53) 00:14:42.479 2.406 - 2.418: 97.4746% ( 30) 00:14:42.479 2.418 - 2.430: 97.6451% ( 22) 00:14:42.479 2.430 - 2.441: 97.7225% ( 10) 00:14:42.479 2.441 - 2.453: 97.8387% ( 15) 00:14:42.479 2.453 - 2.465: 97.9394% ( 13) 00:14:42.479 2.465 - 2.477: 98.0401% ( 13) 00:14:42.479 2.477 - 2.489: 98.1021% ( 8) 00:14:42.479 2.489 - 2.501: 98.1951% ( 12) 00:14:42.479 2.501 - 2.513: 98.2260% ( 4) 00:14:42.479 2.513 - 2.524: 98.2570% ( 4) 00:14:42.479 2.524 - 2.536: 98.2725% ( 2) 00:14:42.479 2.536 - 2.548: 98.2803% ( 1) 00:14:42.479 2.560 - 2.572: 98.2958% ( 2) 00:14:42.479 2.596 - 2.607: 98.3035% ( 1) 00:14:42.479 2.619 - 2.631: 98.3113% ( 1) 00:14:42.479 2.631 - 2.643: 98.3267% ( 2) 00:14:42.479 2.655 - 2.667: 98.3345% ( 1) 00:14:42.479 2.667 - 2.679: 98.3422% ( 1) 00:14:42.479 2.679 - 2.690: 98.3500% ( 1) 00:14:42.479 2.702 - 2.714: 98.3655% ( 2) 00:14:42.479 2.726 - 2.738: 98.3732% ( 1) 00:14:42.479 3.200 - 3.224: 98.3810% ( 1) 00:14:42.479 3.224 - 3.247: 98.3887% ( 1) 00:14:42.479 3.247 - 3.271: 98.4042% ( 2) 00:14:42.479 3.271 - 3.295: 98.4120% ( 1) 00:14:42.479 3.295 - 3.319: 98.4275% ( 2) 00:14:42.479 3.319 - 3.342: 98.4352% ( 1) 00:14:42.479 3.342 - 3.366: 98.4429% ( 1) 00:14:42.479 3.390 - 3.413: 98.4584% ( 2) 00:14:42.479 3.413 - 3.437: 98.4739% ( 2) 00:14:42.479 3.437 - 3.461: 98.4972% ( 3) 00:14:42.479 3.461 - 3.484: 98.5127% ( 2) 
00:14:42.479 3.484 - 3.508: 98.5591% ( 6) 00:14:42.479 3.508 - 3.532: 98.5746% ( 2) 00:14:42.479 3.532 - 3.556: 98.5901% ( 2) 00:14:42.737
[2024-07-25 07:20:14.994996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:42.737 3.603 - 3.627: 98.5979% ( 1) 00:14:42.737 3.698 - 3.721: 98.6056% ( 1) 00:14:42.737 3.721 - 3.745: 98.6211% ( 2) 00:14:42.737 3.745 - 3.769: 98.6289% ( 1) 00:14:42.737 3.769 - 3.793: 98.6366% ( 1) 00:14:42.737 3.840 - 3.864: 98.6521% ( 2) 00:14:42.737 3.911 - 3.935: 98.6598% ( 1) 00:14:42.737 3.935 - 3.959: 98.6676% ( 1) 00:14:42.737 3.959 - 3.982: 98.6753% ( 1) 00:14:42.737 4.219 - 4.243: 98.6831% ( 1) 00:14:42.737 5.404 - 5.428: 98.6908% ( 1) 00:14:42.737 5.570 - 5.594: 98.6986% ( 1) 00:14:42.737 5.641 - 5.665: 98.7063% ( 1) 00:14:42.737 5.713 - 5.736: 98.7141% ( 1) 00:14:42.737 5.784 - 5.807: 98.7218% ( 1) 00:14:42.737 5.807 - 5.831: 98.7296% ( 1) 00:14:42.737 5.855 - 5.879: 98.7373% ( 1) 00:14:42.737 6.021 - 6.044: 98.7451% ( 1) 00:14:42.737 6.068 - 6.116: 98.7683% ( 3) 00:14:42.737 6.210 - 6.258: 98.7760% ( 1) 00:14:42.737 6.353 - 6.400: 98.7838% ( 1) 00:14:42.737 6.400 - 6.447: 98.7915% ( 1) 00:14:42.737 6.495 - 6.542: 98.7993% ( 1) 00:14:42.737 6.684 - 6.732: 98.8148% ( 2) 00:14:42.737 6.827 - 6.874: 98.8225% ( 1) 00:14:42.737 6.921 - 6.969: 98.8303% ( 1) 00:14:42.737 7.443 - 7.490: 98.8380% ( 1) 00:14:42.737 7.490 - 7.538: 98.8535% ( 2) 00:14:42.737 7.680 - 7.727: 98.8613% ( 1) 00:14:42.737 9.055 - 9.102: 98.8690% ( 1) 00:14:42.737 14.981 - 15.076: 98.8768% ( 1) 00:14:42.737 15.550 - 15.644: 98.8845% ( 1) 00:14:42.737 15.644 - 15.739: 98.9000% ( 2) 00:14:42.737 15.739 - 15.834: 98.9155% ( 2) 00:14:42.737 15.834 - 15.929: 98.9232% ( 1) 00:14:42.737 15.929 - 16.024: 98.9542% ( 4) 00:14:42.737 16.024 - 16.119: 99.0084% ( 7) 00:14:42.737 16.119 - 16.213: 99.0239% ( 2) 00:14:42.737 16.213 - 16.308: 99.0472% ( 3) 00:14:42.737 16.308 - 16.403: 99.0937% ( 6) 00:14:42.737 16.403 - 
16.498: 99.1556% ( 8) 00:14:42.737 16.498 - 16.593: 99.2253% ( 9) 00:14:42.737 16.593 - 16.687: 99.3106% ( 11) 00:14:42.737 16.687 - 16.782: 99.3261% ( 2) 00:14:42.737 16.782 - 16.877: 99.3493% ( 3) 00:14:42.737 16.972 - 17.067: 99.3570% ( 1) 00:14:42.737 17.067 - 17.161: 99.3880% ( 4) 00:14:42.737 17.161 - 17.256: 99.4035% ( 2) 00:14:42.737 17.256 - 17.351: 99.4113% ( 1) 00:14:42.737 17.351 - 17.446: 99.4190% ( 1) 00:14:42.737 17.730 - 17.825: 99.4268% ( 1) 00:14:42.737 17.825 - 17.920: 99.4422% ( 2) 00:14:42.737 17.920 - 18.015: 99.4500% ( 1) 00:14:42.737 3021.938 - 3034.074: 99.4577% ( 1) 00:14:42.737 3131.164 - 3155.437: 99.4655% ( 1) 00:14:42.737 3592.344 - 3616.616: 99.4732% ( 1) 00:14:42.737 3980.705 - 4004.978: 99.9070% ( 56) 00:14:42.737 4004.978 - 4029.250: 99.9923% ( 11) 00:14:42.737 4975.881 - 5000.154: 100.0000% ( 1) 00:14:42.737 00:14:42.737 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:42.737 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:42.737 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:42.737 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:42.737 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:42.995 [ 00:14:42.995 { 00:14:42.995 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:42.995 "subtype": "Discovery", 00:14:42.995 "listen_addresses": [], 00:14:42.995 "allow_any_host": true, 00:14:42.995 "hosts": [] 00:14:42.995 }, 00:14:42.995 { 00:14:42.995 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:42.995 "subtype": "NVMe", 00:14:42.995 
"listen_addresses": [ 00:14:42.995 { 00:14:42.995 "trtype": "VFIOUSER", 00:14:42.995 "adrfam": "IPv4", 00:14:42.995 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:42.995 "trsvcid": "0" 00:14:42.995 } 00:14:42.995 ], 00:14:42.995 "allow_any_host": true, 00:14:42.995 "hosts": [], 00:14:42.996 "serial_number": "SPDK1", 00:14:42.996 "model_number": "SPDK bdev Controller", 00:14:42.996 "max_namespaces": 32, 00:14:42.996 "min_cntlid": 1, 00:14:42.996 "max_cntlid": 65519, 00:14:42.996 "namespaces": [ 00:14:42.996 { 00:14:42.996 "nsid": 1, 00:14:42.996 "bdev_name": "Malloc1", 00:14:42.996 "name": "Malloc1", 00:14:42.996 "nguid": "378EF4ECA9644B49BE465301131EAD1A", 00:14:42.996 "uuid": "378ef4ec-a964-4b49-be46-5301131ead1a" 00:14:42.996 } 00:14:42.996 ] 00:14:42.996 }, 00:14:42.996 { 00:14:42.996 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:42.996 "subtype": "NVMe", 00:14:42.996 "listen_addresses": [ 00:14:42.996 { 00:14:42.996 "trtype": "VFIOUSER", 00:14:42.996 "adrfam": "IPv4", 00:14:42.996 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:42.996 "trsvcid": "0" 00:14:42.996 } 00:14:42.996 ], 00:14:42.996 "allow_any_host": true, 00:14:42.996 "hosts": [], 00:14:42.996 "serial_number": "SPDK2", 00:14:42.996 "model_number": "SPDK bdev Controller", 00:14:42.996 "max_namespaces": 32, 00:14:42.996 "min_cntlid": 1, 00:14:42.996 "max_cntlid": 65519, 00:14:42.996 "namespaces": [ 00:14:42.996 { 00:14:42.996 "nsid": 1, 00:14:42.996 "bdev_name": "Malloc2", 00:14:42.996 "name": "Malloc2", 00:14:42.996 "nguid": "112EFFC6AD634181A559518DDE4B26C3", 00:14:42.996 "uuid": "112effc6-ad63-4181-a559-518dde4b26c3" 00:14:42.996 } 00:14:42.996 ] 00:14:42.996 } 00:14:42.996 ] 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2451782 00:14:42.996 07:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:42.996 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:42.996 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.996 [2024-07-25 07:20:15.446797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.254 Malloc3 00:14:43.254 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:43.511 [2024-07-25 07:20:15.802307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.511 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:43.511 Asynchronous Event Request test 00:14:43.511 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.511 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.511 Registering asynchronous event callbacks... 00:14:43.511 Starting namespace attribute notice tests for all controllers... 00:14:43.511 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:43.511 aer_cb - Changed Namespace 00:14:43.511 Cleaning up... 00:14:43.770 [ 00:14:43.770 { 00:14:43.770 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:43.770 "subtype": "Discovery", 00:14:43.770 "listen_addresses": [], 00:14:43.770 "allow_any_host": true, 00:14:43.770 "hosts": [] 00:14:43.770 }, 00:14:43.770 { 00:14:43.770 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:43.770 "subtype": "NVMe", 00:14:43.770 "listen_addresses": [ 00:14:43.770 { 00:14:43.770 "trtype": "VFIOUSER", 00:14:43.770 "adrfam": "IPv4", 00:14:43.770 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:43.770 "trsvcid": "0" 00:14:43.770 } 00:14:43.770 ], 00:14:43.770 "allow_any_host": true, 00:14:43.770 "hosts": [], 00:14:43.770 "serial_number": "SPDK1", 00:14:43.770 "model_number": "SPDK bdev Controller", 00:14:43.770 "max_namespaces": 32, 00:14:43.770 "min_cntlid": 1, 00:14:43.770 "max_cntlid": 65519, 00:14:43.770 "namespaces": [ 00:14:43.770 { 00:14:43.770 "nsid": 1, 00:14:43.770 "bdev_name": "Malloc1", 00:14:43.770 "name": "Malloc1", 00:14:43.770 "nguid": "378EF4ECA9644B49BE465301131EAD1A", 00:14:43.770 "uuid": "378ef4ec-a964-4b49-be46-5301131ead1a" 00:14:43.770 }, 00:14:43.770 { 00:14:43.770 "nsid": 2, 00:14:43.770 "bdev_name": "Malloc3", 00:14:43.770 "name": "Malloc3", 00:14:43.770 "nguid": "4EB0A5CC4B304C6BBE2F0F1C8F262028", 00:14:43.770 "uuid": "4eb0a5cc-4b30-4c6b-be2f-0f1c8f262028" 00:14:43.770 } 00:14:43.770 ] 00:14:43.770 }, 00:14:43.770 { 00:14:43.770 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:43.770 "subtype": "NVMe", 00:14:43.770 "listen_addresses": [ 00:14:43.770 { 00:14:43.770 "trtype": "VFIOUSER", 00:14:43.770 "adrfam": "IPv4", 00:14:43.770 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:43.770 "trsvcid": "0" 00:14:43.770 } 00:14:43.770 ], 00:14:43.770 "allow_any_host": true, 00:14:43.770 "hosts": [], 00:14:43.771 "serial_number": "SPDK2", 00:14:43.771 "model_number": "SPDK bdev Controller", 00:14:43.771 "max_namespaces": 32, 00:14:43.771 "min_cntlid": 1, 00:14:43.771 "max_cntlid": 65519, 00:14:43.771 "namespaces": [ 00:14:43.771 { 00:14:43.771 "nsid": 1, 00:14:43.771 "bdev_name": "Malloc2", 00:14:43.771 "name": "Malloc2", 00:14:43.771 "nguid": "112EFFC6AD634181A559518DDE4B26C3", 00:14:43.771 "uuid": "112effc6-ad63-4181-a559-518dde4b26c3" 00:14:43.771 } 00:14:43.771 ] 00:14:43.771 } 00:14:43.771 ] 00:14:43.771 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2451782 00:14:43.771 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:43.771 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:43.771 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:43.771 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:43.771 [2024-07-25 07:20:16.080947] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:14:43.771 [2024-07-25 07:20:16.080989] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451916 ] 00:14:43.771 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.771 [2024-07-25 07:20:16.116505] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:43.771 [2024-07-25 07:20:16.118855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:43.771 [2024-07-25 07:20:16.118885] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f80bc454000 00:14:43.771 [2024-07-25 07:20:16.119858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.120863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.121864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.122872] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.123876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.124883] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.125888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.126893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.771 [2024-07-25 07:20:16.127904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:43.771 [2024-07-25 07:20:16.127925] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f80bc449000 00:14:43.771 [2024-07-25 07:20:16.129281] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:43.771 [2024-07-25 07:20:16.149622] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:43.771 [2024-07-25 07:20:16.149658] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:43.771 [2024-07-25 07:20:16.151763] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:43.771 [2024-07-25 07:20:16.151819] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:43.771 [2024-07-25 07:20:16.151911] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:43.771 [2024-07-25 07:20:16.151936] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:43.771 [2024-07-25 07:20:16.151947] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:43.771 [2024-07-25 07:20:16.152755] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:43.771 [2024-07-25 07:20:16.152780] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:43.771 [2024-07-25 07:20:16.152793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:43.771 [2024-07-25 07:20:16.153765] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:43.771 [2024-07-25 07:20:16.153785] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:43.771 [2024-07-25 07:20:16.153806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:43.771 [2024-07-25 07:20:16.154765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:43.771 [2024-07-25 07:20:16.154786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:43.771 [2024-07-25 07:20:16.155770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:43.771 [2024-07-25 07:20:16.155790] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:43.771 [2024-07-25 07:20:16.155799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:43.771 [2024-07-25 07:20:16.155810] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:43.771 [2024-07-25 07:20:16.155920] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:43.771 [2024-07-25 07:20:16.155927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:43.771 [2024-07-25 07:20:16.155936] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:43.771 [2024-07-25 07:20:16.156780] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:43.771 [2024-07-25 07:20:16.157788] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:43.771 [2024-07-25 07:20:16.158797] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:43.771 [2024-07-25 07:20:16.159793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.771 [2024-07-25 07:20:16.159877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:43.771 [2024-07-25 07:20:16.160804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:43.771 [2024-07-25 07:20:16.160824] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:43.771 [2024-07-25 07:20:16.160833] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:43.771 [2024-07-25 07:20:16.160856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:43.771 [2024-07-25 07:20:16.160873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:43.771 [2024-07-25 07:20:16.160899] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:43.771 [2024-07-25 07:20:16.160908] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.771 [2024-07-25 07:20:16.160915] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.771 [2024-07-25 07:20:16.160935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.771 [2024-07-25 07:20:16.167262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:43.771 [2024-07-25 07:20:16.167300] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:43.771 [2024-07-25 07:20:16.167310] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:43.772 [2024-07-25 07:20:16.167318] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:43.772 [2024-07-25 07:20:16.167326] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:43.772 [2024-07-25 07:20:16.167334] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:43.772 [2024-07-25 07:20:16.167342] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:43.772 [2024-07-25 07:20:16.167350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.167364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.167385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.175256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.175285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.772 [2024-07-25 07:20:16.175301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.772 [2024-07-25 07:20:16.175313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.772 [2024-07-25 07:20:16.175326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.772 [2024-07-25 07:20:16.175335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.175350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.175366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.183266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.183285] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:43.772 [2024-07-25 07:20:16.183295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.183311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.183322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.183336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.191254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.191331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.191348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.191362] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:43.772 [2024-07-25 07:20:16.191371] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:43.772 [2024-07-25 07:20:16.191377] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.772 [2024-07-25 07:20:16.191387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.199266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.199290] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:43.772 [2024-07-25 07:20:16.199307] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.199322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.199335] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:43.772 [2024-07-25 07:20:16.199344] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.772 [2024-07-25 07:20:16.199350] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.772 [2024-07-25 07:20:16.199359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.207250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.207278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.207295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.207309] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:43.772 [2024-07-25 07:20:16.207317] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.772 [2024-07-25 07:20:16.207323] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.772 [2024-07-25 07:20:16.207333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.215252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.215273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.215286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.215301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.215317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:43.772 
[2024-07-25 07:20:16.215326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.215338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.215347] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:43.772 [2024-07-25 07:20:16.215355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:43.772 [2024-07-25 07:20:16.215363] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:43.772 [2024-07-25 07:20:16.215391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.223252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.223279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.234255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.234280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.242253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.242279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.250251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.250294] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:43.772 [2024-07-25 07:20:16.250305] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:43.772 [2024-07-25 07:20:16.250312] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:43.772 [2024-07-25 07:20:16.250318] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:43.772 [2024-07-25 07:20:16.250324] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:43.772 [2024-07-25 07:20:16.250334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:43.772 [2024-07-25 07:20:16.250346] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:43.772 [2024-07-25 07:20:16.250355] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:43.772 [2024-07-25 07:20:16.250361] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.772 [2024-07-25 07:20:16.250369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.250381] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:43.772 [2024-07-25 07:20:16.250389] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:14:43.772 [2024-07-25 07:20:16.250395] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.772 [2024-07-25 07:20:16.250404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.250416] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:43.772 [2024-07-25 07:20:16.250428] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:43.772 [2024-07-25 07:20:16.250435] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.772 [2024-07-25 07:20:16.250444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:43.772 [2024-07-25 07:20:16.258252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:43.772 [2024-07-25 07:20:16.258289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:43.773 [2024-07-25 07:20:16.258307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:43.773 [2024-07-25 07:20:16.258319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:43.773 ===================================================== 00:14:43.773 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.773 ===================================================== 00:14:43.773 Controller Capabilities/Features 00:14:43.773 ================================ 00:14:43.773 Vendor ID: 4e58 00:14:43.773 
Subsystem Vendor ID: 4e58 00:14:43.773 Serial Number: SPDK2 00:14:43.773 Model Number: SPDK bdev Controller 00:14:43.773 Firmware Version: 24.09 00:14:43.773 Recommended Arb Burst: 6 00:14:43.773 IEEE OUI Identifier: 8d 6b 50 00:14:43.773 Multi-path I/O 00:14:43.773 May have multiple subsystem ports: Yes 00:14:43.773 May have multiple controllers: Yes 00:14:43.773 Associated with SR-IOV VF: No 00:14:43.773 Max Data Transfer Size: 131072 00:14:43.773 Max Number of Namespaces: 32 00:14:43.773 Max Number of I/O Queues: 127 00:14:43.773 NVMe Specification Version (VS): 1.3 00:14:43.773 NVMe Specification Version (Identify): 1.3 00:14:43.773 Maximum Queue Entries: 256 00:14:43.773 Contiguous Queues Required: Yes 00:14:43.773 Arbitration Mechanisms Supported 00:14:43.773 Weighted Round Robin: Not Supported 00:14:43.773 Vendor Specific: Not Supported 00:14:43.773 Reset Timeout: 15000 ms 00:14:43.773 Doorbell Stride: 4 bytes 00:14:43.773 NVM Subsystem Reset: Not Supported 00:14:43.773 Command Sets Supported 00:14:43.773 NVM Command Set: Supported 00:14:43.773 Boot Partition: Not Supported 00:14:43.773 Memory Page Size Minimum: 4096 bytes 00:14:43.773 Memory Page Size Maximum: 4096 bytes 00:14:43.773 Persistent Memory Region: Not Supported 00:14:43.773 Optional Asynchronous Events Supported 00:14:43.773 Namespace Attribute Notices: Supported 00:14:43.773 Firmware Activation Notices: Not Supported 00:14:43.773 ANA Change Notices: Not Supported 00:14:43.773 PLE Aggregate Log Change Notices: Not Supported 00:14:43.773 LBA Status Info Alert Notices: Not Supported 00:14:43.773 EGE Aggregate Log Change Notices: Not Supported 00:14:43.773 Normal NVM Subsystem Shutdown event: Not Supported 00:14:43.773 Zone Descriptor Change Notices: Not Supported 00:14:43.773 Discovery Log Change Notices: Not Supported 00:14:43.773 Controller Attributes 00:14:43.773 128-bit Host Identifier: Supported 00:14:43.773 Non-Operational Permissive Mode: Not Supported 00:14:43.773 NVM Sets: Not Supported 
00:14:43.773 Read Recovery Levels: Not Supported 00:14:43.773 Endurance Groups: Not Supported 00:14:43.773 Predictable Latency Mode: Not Supported 00:14:43.773 Traffic Based Keep ALive: Not Supported 00:14:43.773 Namespace Granularity: Not Supported 00:14:43.773 SQ Associations: Not Supported 00:14:43.773 UUID List: Not Supported 00:14:43.773 Multi-Domain Subsystem: Not Supported 00:14:43.773 Fixed Capacity Management: Not Supported 00:14:43.773 Variable Capacity Management: Not Supported 00:14:43.773 Delete Endurance Group: Not Supported 00:14:43.773 Delete NVM Set: Not Supported 00:14:43.773 Extended LBA Formats Supported: Not Supported 00:14:43.773 Flexible Data Placement Supported: Not Supported 00:14:43.773 00:14:43.773 Controller Memory Buffer Support 00:14:43.773 ================================ 00:14:43.773 Supported: No 00:14:43.773 00:14:43.773 Persistent Memory Region Support 00:14:43.773 ================================ 00:14:43.773 Supported: No 00:14:43.773 00:14:43.773 Admin Command Set Attributes 00:14:43.773 ============================ 00:14:43.773 Security Send/Receive: Not Supported 00:14:43.773 Format NVM: Not Supported 00:14:43.773 Firmware Activate/Download: Not Supported 00:14:43.773 Namespace Management: Not Supported 00:14:43.773 Device Self-Test: Not Supported 00:14:43.773 Directives: Not Supported 00:14:43.773 NVMe-MI: Not Supported 00:14:43.773 Virtualization Management: Not Supported 00:14:43.773 Doorbell Buffer Config: Not Supported 00:14:43.773 Get LBA Status Capability: Not Supported 00:14:43.773 Command & Feature Lockdown Capability: Not Supported 00:14:43.773 Abort Command Limit: 4 00:14:43.773 Async Event Request Limit: 4 00:14:43.773 Number of Firmware Slots: N/A 00:14:43.773 Firmware Slot 1 Read-Only: N/A 00:14:43.773 Firmware Activation Without Reset: N/A 00:14:43.773 Multiple Update Detection Support: N/A 00:14:43.773 Firmware Update Granularity: No Information Provided 00:14:43.773 Per-Namespace SMART Log: No 00:14:43.773 
Asymmetric Namespace Access Log Page: Not Supported 00:14:43.773 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:43.773 Command Effects Log Page: Supported 00:14:43.773 Get Log Page Extended Data: Supported 00:14:43.773 Telemetry Log Pages: Not Supported 00:14:43.773 Persistent Event Log Pages: Not Supported 00:14:43.773 Supported Log Pages Log Page: May Support 00:14:43.773 Commands Supported & Effects Log Page: Not Supported 00:14:43.773 Feature Identifiers & Effects Log Page:May Support 00:14:43.773 NVMe-MI Commands & Effects Log Page: May Support 00:14:43.773 Data Area 4 for Telemetry Log: Not Supported 00:14:43.773 Error Log Page Entries Supported: 128 00:14:43.773 Keep Alive: Supported 00:14:43.773 Keep Alive Granularity: 10000 ms 00:14:43.773 00:14:43.773 NVM Command Set Attributes 00:14:43.773 ========================== 00:14:43.773 Submission Queue Entry Size 00:14:43.773 Max: 64 00:14:43.773 Min: 64 00:14:43.773 Completion Queue Entry Size 00:14:43.773 Max: 16 00:14:43.773 Min: 16 00:14:43.773 Number of Namespaces: 32 00:14:43.773 Compare Command: Supported 00:14:43.773 Write Uncorrectable Command: Not Supported 00:14:43.773 Dataset Management Command: Supported 00:14:43.773 Write Zeroes Command: Supported 00:14:43.773 Set Features Save Field: Not Supported 00:14:43.773 Reservations: Not Supported 00:14:43.773 Timestamp: Not Supported 00:14:43.773 Copy: Supported 00:14:43.773 Volatile Write Cache: Present 00:14:43.773 Atomic Write Unit (Normal): 1 00:14:43.773 Atomic Write Unit (PFail): 1 00:14:43.773 Atomic Compare & Write Unit: 1 00:14:43.773 Fused Compare & Write: Supported 00:14:43.773 Scatter-Gather List 00:14:43.773 SGL Command Set: Supported (Dword aligned) 00:14:43.773 SGL Keyed: Not Supported 00:14:43.773 SGL Bit Bucket Descriptor: Not Supported 00:14:43.773 SGL Metadata Pointer: Not Supported 00:14:43.773 Oversized SGL: Not Supported 00:14:43.773 SGL Metadata Address: Not Supported 00:14:43.773 SGL Offset: Not Supported 00:14:43.773 Transport 
SGL Data Block: Not Supported 00:14:43.773 Replay Protected Memory Block: Not Supported 00:14:43.773 00:14:43.773 Firmware Slot Information 00:14:43.773 ========================= 00:14:43.773 Active slot: 1 00:14:43.773 Slot 1 Firmware Revision: 24.09 00:14:43.773 00:14:43.773 00:14:43.773 Commands Supported and Effects 00:14:43.773 ============================== 00:14:43.773 Admin Commands 00:14:43.773 -------------- 00:14:43.773 Get Log Page (02h): Supported 00:14:43.773 Identify (06h): Supported 00:14:43.773 Abort (08h): Supported 00:14:43.773 Set Features (09h): Supported 00:14:43.773 Get Features (0Ah): Supported 00:14:43.773 Asynchronous Event Request (0Ch): Supported 00:14:43.773 Keep Alive (18h): Supported 00:14:43.773 I/O Commands 00:14:43.773 ------------ 00:14:43.773 Flush (00h): Supported LBA-Change 00:14:43.773 Write (01h): Supported LBA-Change 00:14:43.773 Read (02h): Supported 00:14:43.773 Compare (05h): Supported 00:14:43.773 Write Zeroes (08h): Supported LBA-Change 00:14:43.773 Dataset Management (09h): Supported LBA-Change 00:14:43.773 Copy (19h): Supported LBA-Change 00:14:43.773 00:14:43.773 Error Log 00:14:43.773 ========= 00:14:43.773 00:14:43.773 Arbitration 00:14:43.773 =========== 00:14:43.773 Arbitration Burst: 1 00:14:43.773 00:14:43.773 Power Management 00:14:43.773 ================ 00:14:43.773 Number of Power States: 1 00:14:43.773 Current Power State: Power State #0 00:14:43.773 Power State #0: 00:14:43.773 Max Power: 0.00 W 00:14:43.773 Non-Operational State: Operational 00:14:43.773 Entry Latency: Not Reported 00:14:43.773 Exit Latency: Not Reported 00:14:43.773 Relative Read Throughput: 0 00:14:43.773 Relative Read Latency: 0 00:14:43.773 Relative Write Throughput: 0 00:14:43.773 Relative Write Latency: 0 00:14:43.773 Idle Power: Not Reported 00:14:43.774 Active Power: Not Reported 00:14:43.774 Non-Operational Permissive Mode: Not Supported 00:14:43.774 00:14:43.774 Health Information 00:14:43.774 ================== 00:14:43.774 
Critical Warnings: 00:14:43.774 Available Spare Space: OK 00:14:43.774 Temperature: OK 00:14:43.774 Device Reliability: OK 00:14:43.774 Read Only: No 00:14:43.774 Volatile Memory Backup: OK 00:14:43.774 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:43.774 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:43.774 Available Spare: 0% 00:14:43.774 Available Sp[2024-07-25 07:20:16.258442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:43.774 [2024-07-25 07:20:16.266254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:43.774 [2024-07-25 07:20:16.266303] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:43.774 [2024-07-25 07:20:16.266321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.774 [2024-07-25 07:20:16.266333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.774 [2024-07-25 07:20:16.266343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.774 [2024-07-25 07:20:16.266353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.774 [2024-07-25 07:20:16.266433] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:43.774 [2024-07-25 07:20:16.266454] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:43.774 [2024-07-25 07:20:16.267431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.774 [2024-07-25 07:20:16.267517] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:43.774 [2024-07-25 07:20:16.267547] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:43.774 [2024-07-25 07:20:16.268454] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:43.774 [2024-07-25 07:20:16.268480] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:43.774 [2024-07-25 07:20:16.268534] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:43.774 [2024-07-25 07:20:16.269764] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:44.032 are Threshold: 0% 00:14:44.032 Life Percentage Used: 0% 00:14:44.032 Data Units Read: 0 00:14:44.032 Data Units Written: 0 00:14:44.032 Host Read Commands: 0 00:14:44.032 Host Write Commands: 0 00:14:44.032 Controller Busy Time: 0 minutes 00:14:44.032 Power Cycles: 0 00:14:44.032 Power On Hours: 0 hours 00:14:44.032 Unsafe Shutdowns: 0 00:14:44.032 Unrecoverable Media Errors: 0 00:14:44.032 Lifetime Error Log Entries: 0 00:14:44.032 Warning Temperature Time: 0 minutes 00:14:44.032 Critical Temperature Time: 0 minutes 00:14:44.032 00:14:44.032 Number of Queues 00:14:44.032 ================ 00:14:44.032 Number of I/O Submission Queues: 127 00:14:44.032 Number of I/O Completion Queues: 127 00:14:44.032 00:14:44.032 Active Namespaces 00:14:44.032 ================= 00:14:44.032 Namespace ID:1 00:14:44.032 Error Recovery Timeout: Unlimited 00:14:44.032 Command Set Identifier: NVM (00h) 00:14:44.032 Deallocate: 
Supported 00:14:44.032 Deallocated/Unwritten Error: Not Supported 00:14:44.032 Deallocated Read Value: Unknown 00:14:44.032 Deallocate in Write Zeroes: Not Supported 00:14:44.032 Deallocated Guard Field: 0xFFFF 00:14:44.032 Flush: Supported 00:14:44.032 Reservation: Supported 00:14:44.032 Namespace Sharing Capabilities: Multiple Controllers 00:14:44.032 Size (in LBAs): 131072 (0GiB) 00:14:44.032 Capacity (in LBAs): 131072 (0GiB) 00:14:44.032 Utilization (in LBAs): 131072 (0GiB) 00:14:44.032 NGUID: 112EFFC6AD634181A559518DDE4B26C3 00:14:44.032 UUID: 112effc6-ad63-4181-a559-518dde4b26c3 00:14:44.032 Thin Provisioning: Not Supported 00:14:44.032 Per-NS Atomic Units: Yes 00:14:44.032 Atomic Boundary Size (Normal): 0 00:14:44.032 Atomic Boundary Size (PFail): 0 00:14:44.032 Atomic Boundary Offset: 0 00:14:44.032 Maximum Single Source Range Length: 65535 00:14:44.032 Maximum Copy Length: 65535 00:14:44.032 Maximum Source Range Count: 1 00:14:44.032 NGUID/EUI64 Never Reused: No 00:14:44.032 Namespace Write Protected: No 00:14:44.032 Number of LBA Formats: 1 00:14:44.032 Current LBA Format: LBA Format #00 00:14:44.032 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:44.032 00:14:44.032 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:44.032 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.032 [2024-07-25 07:20:16.496135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:49.329 Initializing NVMe Controllers 00:14:49.329 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:49.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:49.329 
Initialization complete. Launching workers. 00:14:49.329 ======================================================== 00:14:49.329 Latency(us) 00:14:49.329 Device Information : IOPS MiB/s Average min max 00:14:49.329 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34366.50 134.24 3723.63 1171.35 11461.41 00:14:49.329 ======================================================== 00:14:49.330 Total : 34366.50 134.24 3723.63 1171.35 11461.41 00:14:49.330 00:14:49.330 [2024-07-25 07:20:21.604609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.330 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:49.330 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.330 [2024-07-25 07:20:21.841342] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.592 Initializing NVMe Controllers 00:14:54.592 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:54.592 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:54.592 Initialization complete. Launching workers. 
00:14:54.592 ======================================================== 00:14:54.592 Latency(us) 00:14:54.592 Device Information : IOPS MiB/s Average min max 00:14:54.592 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31973.64 124.90 4002.44 1193.94 8335.03 00:14:54.592 ======================================================== 00:14:54.592 Total : 31973.64 124.90 4002.44 1193.94 8335.03 00:14:54.592 00:14:54.592 [2024-07-25 07:20:26.859352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.592 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:54.592 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.592 [2024-07-25 07:20:27.073252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.852 [2024-07-25 07:20:32.218403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.852 Initializing NVMe Controllers 00:14:59.852 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.852 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.852 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:59.852 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:59.852 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:59.852 Initialization complete. Launching workers. 
00:14:59.852 Starting thread on core 2 00:14:59.852 Starting thread on core 3 00:14:59.852 Starting thread on core 1 00:14:59.852 07:20:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:59.852 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.110 [2024-07-25 07:20:32.523743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.392 [2024-07-25 07:20:35.606599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.392 Initializing NVMe Controllers 00:15:03.392 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.392 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.392 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:03.392 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:03.392 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:03.392 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:03.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:03.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:03.392 Initialization complete. Launching workers. 
00:15:03.392 Starting thread on core 1 with urgent priority queue 00:15:03.392 Starting thread on core 2 with urgent priority queue 00:15:03.392 Starting thread on core 3 with urgent priority queue 00:15:03.392 Starting thread on core 0 with urgent priority queue 00:15:03.392 SPDK bdev Controller (SPDK2 ) core 0: 4529.67 IO/s 22.08 secs/100000 ios 00:15:03.392 SPDK bdev Controller (SPDK2 ) core 1: 4753.67 IO/s 21.04 secs/100000 ios 00:15:03.392 SPDK bdev Controller (SPDK2 ) core 2: 5876.00 IO/s 17.02 secs/100000 ios 00:15:03.392 SPDK bdev Controller (SPDK2 ) core 3: 5648.00 IO/s 17.71 secs/100000 ios 00:15:03.392 ======================================================== 00:15:03.392 00:15:03.392 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:03.392 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.392 [2024-07-25 07:20:35.907788] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.392 Initializing NVMe Controllers 00:15:03.392 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.392 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.392 Namespace ID: 1 size: 0GB 00:15:03.392 Initialization complete. 00:15:03.392 INFO: using host memory buffer for IO 00:15:03.392 Hello world! 
00:15:03.392 [2024-07-25 07:20:35.917875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.649 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:03.649 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.907 [2024-07-25 07:20:36.197761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:04.841 Initializing NVMe Controllers 00:15:04.841 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:04.841 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:04.841 Initialization complete. Launching workers. 00:15:04.841 submit (in ns) avg, min, max = 9777.0, 3517.8, 4017333.3 00:15:04.841 complete (in ns) avg, min, max = 26438.1, 2062.2, 4015427.8 00:15:04.841 00:15:04.841 Submit histogram 00:15:04.841 ================ 00:15:04.841 Range in us Cumulative Count 00:15:04.841 3.508 - 3.532: 0.1612% ( 21) 00:15:04.841 3.532 - 3.556: 0.5219% ( 47) 00:15:04.841 3.556 - 3.579: 2.1874% ( 217) 00:15:04.841 3.579 - 3.603: 5.7564% ( 465) 00:15:04.841 3.603 - 3.627: 11.3593% ( 730) 00:15:04.841 3.627 - 3.650: 19.9632% ( 1121) 00:15:04.841 3.650 - 3.674: 29.3039% ( 1217) 00:15:04.841 3.674 - 3.698: 38.6676% ( 1220) 00:15:04.841 3.698 - 3.721: 47.2254% ( 1115) 00:15:04.841 3.721 - 3.745: 53.8491% ( 863) 00:15:04.841 3.745 - 3.769: 58.7305% ( 636) 00:15:04.841 3.769 - 3.793: 62.9979% ( 556) 00:15:04.841 3.793 - 3.816: 66.4287% ( 447) 00:15:04.841 3.816 - 3.840: 70.3354% ( 509) 00:15:04.841 3.840 - 3.864: 73.5590% ( 420) 00:15:04.841 3.864 - 3.887: 77.0819% ( 459) 00:15:04.841 3.887 - 3.911: 80.8734% ( 494) 00:15:04.841 3.911 - 3.935: 84.4040% ( 460) 00:15:04.841 3.935 - 3.959: 86.8831% ( 323) 00:15:04.841 3.959 - 
3.982: 88.8480% ( 256) 00:15:04.841 3.982 - 4.006: 90.4674% ( 211) 00:15:04.841 4.006 - 4.030: 91.7722% ( 170) 00:15:04.841 4.030 - 4.053: 93.0079% ( 161) 00:15:04.841 4.053 - 4.077: 94.0594% ( 137) 00:15:04.841 4.077 - 4.101: 94.7885% ( 95) 00:15:04.841 4.101 - 4.124: 95.4179% ( 82) 00:15:04.841 4.124 - 4.148: 95.9859% ( 74) 00:15:04.841 4.148 - 4.172: 96.4387% ( 59) 00:15:04.841 4.172 - 4.196: 96.7304% ( 38) 00:15:04.841 4.196 - 4.219: 96.8301% ( 13) 00:15:04.841 4.219 - 4.243: 96.9530% ( 16) 00:15:04.841 4.243 - 4.267: 97.0758% ( 16) 00:15:04.841 4.267 - 4.290: 97.1986% ( 16) 00:15:04.841 4.290 - 4.314: 97.2676% ( 9) 00:15:04.841 4.314 - 4.338: 97.3828% ( 15) 00:15:04.841 4.338 - 4.361: 97.4749% ( 12) 00:15:04.841 4.361 - 4.385: 97.5209% ( 6) 00:15:04.841 4.385 - 4.409: 97.5516% ( 4) 00:15:04.841 4.409 - 4.433: 97.5593% ( 1) 00:15:04.841 4.433 - 4.456: 97.5900% ( 4) 00:15:04.841 4.456 - 4.480: 97.5977% ( 1) 00:15:04.841 4.480 - 4.504: 97.6130% ( 2) 00:15:04.841 4.504 - 4.527: 97.6284% ( 2) 00:15:04.841 4.527 - 4.551: 97.6437% ( 2) 00:15:04.841 4.551 - 4.575: 97.6667% ( 3) 00:15:04.841 4.575 - 4.599: 97.6744% ( 1) 00:15:04.841 4.622 - 4.646: 97.6898% ( 2) 00:15:04.841 4.646 - 4.670: 97.6974% ( 1) 00:15:04.841 4.670 - 4.693: 97.7128% ( 2) 00:15:04.841 4.717 - 4.741: 97.7281% ( 2) 00:15:04.841 4.741 - 4.764: 97.7358% ( 1) 00:15:04.841 4.764 - 4.788: 97.7665% ( 4) 00:15:04.841 4.788 - 4.812: 97.7972% ( 4) 00:15:04.841 4.812 - 4.836: 97.8049% ( 1) 00:15:04.841 4.836 - 4.859: 97.8202% ( 2) 00:15:04.841 4.859 - 4.883: 97.8740% ( 7) 00:15:04.841 4.883 - 4.907: 97.8816% ( 1) 00:15:04.841 4.907 - 4.930: 97.9123% ( 4) 00:15:04.841 4.930 - 4.954: 97.9507% ( 5) 00:15:04.841 4.954 - 4.978: 97.9891% ( 5) 00:15:04.841 4.978 - 5.001: 98.0275% ( 5) 00:15:04.841 5.001 - 5.025: 98.0966% ( 9) 00:15:04.841 5.025 - 5.049: 98.1119% ( 2) 00:15:04.841 5.049 - 5.073: 98.1426% ( 4) 00:15:04.841 5.073 - 5.096: 98.1887% ( 6) 00:15:04.841 5.096 - 5.120: 98.2040% ( 2) 00:15:04.841 5.120 - 
5.144: 98.2270% ( 3) 00:15:04.841 5.167 - 5.191: 98.2654% ( 5) 00:15:04.841 5.191 - 5.215: 98.3038% ( 5) 00:15:04.841 5.262 - 5.286: 98.3115% ( 1) 00:15:04.841 5.286 - 5.310: 98.3345% ( 3) 00:15:04.841 5.310 - 5.333: 98.3422% ( 1) 00:15:04.841 5.333 - 5.357: 98.3575% ( 2) 00:15:04.841 5.357 - 5.381: 98.3652% ( 1) 00:15:04.842 5.381 - 5.404: 98.3729% ( 1) 00:15:04.842 5.404 - 5.428: 98.3882% ( 2) 00:15:04.842 5.428 - 5.452: 98.3959% ( 1) 00:15:04.842 5.452 - 5.476: 98.4036% ( 1) 00:15:04.842 5.499 - 5.523: 98.4112% ( 1) 00:15:04.842 5.618 - 5.641: 98.4189% ( 1) 00:15:04.842 5.641 - 5.665: 98.4266% ( 1) 00:15:04.842 5.760 - 5.784: 98.4343% ( 1) 00:15:04.842 5.807 - 5.831: 98.4419% ( 1) 00:15:04.842 6.021 - 6.044: 98.4496% ( 1) 00:15:04.842 6.116 - 6.163: 98.4573% ( 1) 00:15:04.842 6.258 - 6.305: 98.4650% ( 1) 00:15:04.842 6.305 - 6.353: 98.4803% ( 2) 00:15:04.842 6.447 - 6.495: 98.4880% ( 1) 00:15:04.842 6.542 - 6.590: 98.4957% ( 1) 00:15:04.842 6.732 - 6.779: 98.5187% ( 3) 00:15:04.842 6.779 - 6.827: 98.5264% ( 1) 00:15:04.842 6.921 - 6.969: 98.5340% ( 1) 00:15:04.842 6.969 - 7.016: 98.5417% ( 1) 00:15:04.842 7.064 - 7.111: 98.5494% ( 1) 00:15:04.842 7.159 - 7.206: 98.5647% ( 2) 00:15:04.842 7.206 - 7.253: 98.5801% ( 2) 00:15:04.842 7.253 - 7.301: 98.5954% ( 2) 00:15:04.842 7.348 - 7.396: 98.6185% ( 3) 00:15:04.842 7.443 - 7.490: 98.6261% ( 1) 00:15:04.842 7.538 - 7.585: 98.6338% ( 1) 00:15:04.842 7.680 - 7.727: 98.6415% ( 1) 00:15:04.842 7.727 - 7.775: 98.6492% ( 1) 00:15:04.842 7.775 - 7.822: 98.6645% ( 2) 00:15:04.842 7.822 - 7.870: 98.6799% ( 2) 00:15:04.842 7.917 - 7.964: 98.6875% ( 1) 00:15:04.842 7.964 - 8.012: 98.6952% ( 1) 00:15:04.842 8.012 - 8.059: 98.7029% ( 1) 00:15:04.842 8.107 - 8.154: 98.7106% ( 1) 00:15:04.842 8.154 - 8.201: 98.7182% ( 1) 00:15:04.842 8.201 - 8.249: 98.7336% ( 2) 00:15:04.842 8.249 - 8.296: 98.7489% ( 2) 00:15:04.842 8.296 - 8.344: 98.7566% ( 1) 00:15:04.842 8.344 - 8.391: 98.7643% ( 1) 00:15:04.842 8.439 - 8.486: 98.7720% ( 1) 
00:15:04.842 8.533 - 8.581: 98.7796% ( 1) 00:15:04.842 9.007 - 9.055: 98.7873% ( 1) 00:15:04.842 9.339 - 9.387: 98.7950% ( 1) 00:15:04.842 9.719 - 9.766: 98.8027% ( 1) 00:15:04.842 9.766 - 9.813: 98.8103% ( 1) 00:15:04.842 9.956 - 10.003: 98.8180% ( 1) 00:15:04.842 10.003 - 10.050: 98.8257% ( 1) 00:15:04.842 10.050 - 10.098: 98.8334% ( 1) 00:15:04.842 10.145 - 10.193: 98.8410% ( 1) 00:15:04.842 10.430 - 10.477: 98.8487% ( 1) 00:15:04.842 10.572 - 10.619: 98.8564% ( 1) 00:15:04.842 10.667 - 10.714: 98.8641% ( 1) 00:15:04.842 10.856 - 10.904: 98.8717% ( 1) 00:15:04.842 11.473 - 11.520: 98.8794% ( 1) 00:15:04.842 11.804 - 11.852: 98.8871% ( 1) 00:15:04.842 12.516 - 12.610: 98.8948% ( 1) 00:15:04.842 12.705 - 12.800: 98.9024% ( 1) 00:15:04.842 13.369 - 13.464: 98.9101% ( 1) 00:15:04.842 13.464 - 13.559: 98.9178% ( 1) 00:15:04.842 13.653 - 13.748: 98.9255% ( 1) 00:15:04.842 13.843 - 13.938: 98.9331% ( 1) 00:15:04.842 14.222 - 14.317: 98.9485% ( 2) 00:15:04.842 14.696 - 14.791: 98.9562% ( 1) 00:15:04.842 14.886 - 14.981: 98.9638% ( 1) 00:15:04.842 17.256 - 17.351: 98.9715% ( 1) 00:15:04.842 17.351 - 17.446: 99.0022% ( 4) 00:15:04.842 17.446 - 17.541: 99.0329% ( 4) 00:15:04.842 17.541 - 17.636: 99.0483% ( 2) 00:15:04.842 17.636 - 17.730: 99.0713% ( 3) 00:15:04.842 17.730 - 17.825: 99.1327% ( 8) 00:15:04.842 17.825 - 17.920: 99.1481% ( 2) 00:15:04.842 17.920 - 18.015: 99.1941% ( 6) 00:15:04.842 18.015 - 18.110: 99.2478% ( 7) 00:15:04.842 18.110 - 18.204: 99.3169% ( 9) 00:15:04.842 18.204 - 18.299: 99.4013% ( 11) 00:15:04.842 18.299 - 18.394: 99.4858% ( 11) 00:15:04.842 18.394 - 18.489: 99.5548% ( 9) 00:15:04.842 18.489 - 18.584: 99.5932% ( 5) 00:15:04.842 18.584 - 18.679: 99.6086% ( 2) 00:15:04.842 18.679 - 18.773: 99.6469% ( 5) 00:15:04.842 18.773 - 18.868: 99.6546% ( 1) 00:15:04.842 18.868 - 18.963: 99.6930% ( 5) 00:15:04.842 18.963 - 19.058: 99.7007% ( 1) 00:15:04.842 19.058 - 19.153: 99.7160% ( 2) 00:15:04.842 19.247 - 19.342: 99.7314% ( 2) 00:15:04.842 19.342 - 
19.437: 99.7544% ( 3) 00:15:04.842 19.437 - 19.532: 99.7851% ( 4) 00:15:04.842 19.532 - 19.627: 99.7928% ( 1) 00:15:04.842 19.627 - 19.721: 99.8004% ( 1) 00:15:04.842 19.721 - 19.816: 99.8081% ( 1) 00:15:04.842 19.911 - 20.006: 99.8158% ( 1) 00:15:04.842 20.196 - 20.290: 99.8235% ( 1) 00:15:04.842 20.480 - 20.575: 99.8311% ( 1) 00:15:04.842 21.049 - 21.144: 99.8388% ( 1) 00:15:04.842 28.444 - 28.634: 99.8465% ( 1) 00:15:04.842 29.772 - 29.961: 99.8542% ( 1) 00:15:04.842 3980.705 - 4004.978: 99.9309% ( 10) 00:15:04.842 4004.978 - 4029.250: 100.0000% ( 9) 00:15:04.842 00:15:04.842 Complete histogram 00:15:04.842 ================== 00:15:04.842 Range in us Cumulative Count 00:15:04.842 2.062 - 2.074: 7.3759% ( 961) 00:15:04.842 2.074 - 2.086: 40.8780% ( 4365) 00:15:04.842 2.086 - 2.098: 46.7649% ( 767) 00:15:04.842 2.098 - 2.110: 53.0432% ( 818) 00:15:04.842 2.110 - 2.121: 61.5627% ( 1110) 00:15:04.842 2.121 - 2.133: 63.3433% ( 232) 00:15:04.842 2.133 - 2.145: 69.7444% ( 834) 00:15:04.842 2.145 - 2.157: 79.2156% ( 1234) 00:15:04.842 2.157 - 2.169: 80.6662% ( 189) 00:15:04.842 2.169 - 2.181: 84.0356% ( 439) 00:15:04.842 2.181 - 2.193: 87.1824% ( 410) 00:15:04.842 2.193 - 2.204: 88.0958% ( 119) 00:15:04.842 2.204 - 2.216: 89.3468% ( 163) 00:15:04.842 2.216 - 2.228: 91.6033% ( 294) 00:15:04.842 2.228 - 2.240: 93.4607% ( 242) 00:15:04.842 2.240 - 2.252: 94.2743% ( 106) 00:15:04.842 2.252 - 2.264: 94.8500% ( 75) 00:15:04.842 2.264 - 2.276: 94.9881% ( 18) 00:15:04.842 2.276 - 2.287: 95.1723% ( 24) 00:15:04.842 2.287 - 2.299: 95.4716% ( 39) 00:15:04.842 2.299 - 2.311: 95.7940% ( 42) 00:15:04.842 2.311 - 2.323: 95.9398% ( 19) 00:15:04.842 2.323 - 2.335: 95.9859% ( 6) 00:15:04.842 2.335 - 2.347: 96.1240% ( 18) 00:15:04.842 2.347 - 2.359: 96.3082% ( 24) 00:15:04.842 2.359 - 2.370: 96.6383% ( 43) 00:15:04.842 2.370 - 2.382: 96.9606% ( 42) 00:15:04.842 2.382 - 2.394: 97.3444% ( 50) 00:15:04.842 2.394 - 2.406: 97.5670% ( 29) 00:15:04.842 2.406 - 2.418: 97.8586% ( 38) 00:15:04.842 
2.418 - 2.430: 98.0659% ( 27) 00:15:04.842 2.430 - 2.441: 98.2424% ( 23) 00:15:04.842 2.441 - 2.453: 98.4112% ( 22) 00:15:04.842 2.453 - 2.465: 98.4573% ( 6) 00:15:04.842 2.465 - 2.477: 98.5033% ( 6) 00:15:04.842 2.477 - 2.489: 98.5110% ( 1) 00:15:04.842 2.489 - 2.501: 98.5340% ( 3) 00:15:04.842 2.501 - 2.513: 98.5724% ( 5) 00:15:04.842 2.513 - 2.524: 98.6031% ( 4) 00:15:04.842 2.536 - 2.548: 98.6108% ( 1) 00:15:04.842 2.560 - 2.572: 98.6185% ( 1) 00:15:04.842 2.572 - 2.584: 98.6261% ( 1) 00:15:04.842 2.607 - 2.619: 98.6338% ( 1) 00:15:04.842 2.619 - 2.631: 98.6415% ( 1) 00:15:04.842 2.631 - 2.643: 98.6645% ( 3) 00:15:04.842 2.643 - 2.655: 98.6722% ( 1) 00:15:04.842 2.667 - 2.679: 98.6952% ( 3) 00:15:04.842 2.679 - 2.690: 98.7029% ( 1) 00:15:04.842 2.738 - 2.750: 98.7106% ( 1) 00:15:04.842 3.271 - 3.295: 98.7182% ( 1) 00:15:04.842 3.319 - 3.342: 98.7259% ( 1) 00:15:04.842 3.342 - 3.366: 98.7336% ( 1) 00:15:04.842 3.413 - 3.437: 98.7413% ( 1) 00:15:04.842 3.437 - 3.461: 98.7566% ( 2) 00:15:04.842 3.461 - 3.484: 98.7720% ( 2) 00:15:04.842 3.484 - 3.508: 98.7796% ( 1) 00:15:04.842 3.508 - 3.532: 98.7873% ( 1) 00:15:04.842 3.579 - 3.603: 98.7950% ( 1) 00:15:04.842 3.627 - 3.650: 98.8027% ( 1) 00:15:04.842 3.650 - 3.674: 98.8103% ( 1) 00:15:04.842 3.674 - 3.698: 98.8257% ( 2) 00:15:04.842 3.793 - 3.816: 98.8410% ( 2) 00:15:04.842 3.864 - 3.887: 98.8487% ( 1) 00:15:04.842 5.001 - 5.025: 98.8564% ( 1) 00:15:04.842 5.239 - 5.262: 98.8641% ( 1) 00:15:04.842 5.855 - 5.879: 98.8717% ( 1) 00:15:04.842 5.902 - 5.926: 98.8794% ( 1) 00:15:04.842 5.997 - 6.021: 98.8871% ( 1) 00:15:04.842 6.305 - 6.353: 98.8948% ( 1) 00:15:04.842 6.590 - 6.637: 98.9024% ( 1) 00:15:04.842 7.490 - 7.538: 98.9101% ( 1) 00:15:04.842 8.012 - 8.059: 98.9178% ( 1) 00:15:04.842 8.439 - 8.486: 98.9331% ( 2) 00:15:04.842 15.170 - 15.265: 98.9408% ( 1) 00:15:04.842 15.550 - 15.644: 98.9485% ( 1) 00:15:04.842 15.644 - 15.739: 98.9562% ( 1) 00:15:04.842 15.834 - 15.929: 98.9638% ( 1) 00:15:04.842 15.929 - 
16.024: 98.9792% ( 2) 00:15:04.843 16.024 - 16.119: 99.0022% ( 3) 00:15:04.843 16.119 - 16.213: 99.0406% ( 5) 00:15:04.843 [2024-07-25 07:20:37.295980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:04.843 16.213 - 16.308: 99.0713% ( 4) 00:15:04.843 16.308 - 16.403: 99.0790% ( 1) 00:15:04.843 16.403 - 16.498: 99.1097% ( 4) 00:15:04.843 16.498 - 16.593: 99.1711% ( 8) 00:15:04.843 16.593 - 16.687: 99.2018% ( 4) 00:15:04.843 16.687 - 16.782: 99.2095% ( 1) 00:15:04.843 16.782 - 16.877: 99.2478% ( 5) 00:15:04.843 16.877 - 16.972: 99.2709% ( 3) 00:15:04.843 16.972 - 17.067: 99.2785% ( 1) 00:15:04.843 17.067 - 17.161: 99.2939% ( 2) 00:15:04.843 17.161 - 17.256: 99.3246% ( 4) 00:15:04.843 17.446 - 17.541: 99.3399% ( 2) 00:15:04.843 17.730 - 17.825: 99.3476% ( 1) 00:15:04.843 17.825 - 17.920: 99.3553% ( 1) 00:15:04.843 18.015 - 18.110: 99.3630% ( 1) 00:15:04.843 18.394 - 18.489: 99.3783% ( 2) 00:15:04.843 18.773 - 18.868: 99.3860% ( 1) 00:15:04.843 20.954 - 21.049: 99.3937% ( 1) 00:15:04.843 3616.616 - 3640.889: 99.4013% ( 1) 00:15:04.843 3980.705 - 4004.978: 99.7851% ( 50) 00:15:04.843 4004.978 - 4029.250: 100.0000% ( 28) 00:15:04.843 00:15:04.843 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:04.843 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:04.843 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:04.843 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:04.843 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:05.101 [ 00:15:05.101 { 00:15:05.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.101 "subtype": "Discovery", 00:15:05.101 "listen_addresses": [], 00:15:05.101 "allow_any_host": true, 00:15:05.101 "hosts": [] 00:15:05.101 }, 00:15:05.101 { 00:15:05.101 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:05.101 "subtype": "NVMe", 00:15:05.101 "listen_addresses": [ 00:15:05.101 { 00:15:05.101 "trtype": "VFIOUSER", 00:15:05.101 "adrfam": "IPv4", 00:15:05.101 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:05.101 "trsvcid": "0" 00:15:05.101 } 00:15:05.101 ], 00:15:05.101 "allow_any_host": true, 00:15:05.101 "hosts": [], 00:15:05.101 "serial_number": "SPDK1", 00:15:05.101 "model_number": "SPDK bdev Controller", 00:15:05.101 "max_namespaces": 32, 00:15:05.101 "min_cntlid": 1, 00:15:05.101 "max_cntlid": 65519, 00:15:05.101 "namespaces": [ 00:15:05.101 { 00:15:05.101 "nsid": 1, 00:15:05.101 "bdev_name": "Malloc1", 00:15:05.101 "name": "Malloc1", 00:15:05.101 "nguid": "378EF4ECA9644B49BE465301131EAD1A", 00:15:05.101 "uuid": "378ef4ec-a964-4b49-be46-5301131ead1a" 00:15:05.101 }, 00:15:05.101 { 00:15:05.101 "nsid": 2, 00:15:05.101 "bdev_name": "Malloc3", 00:15:05.101 "name": "Malloc3", 00:15:05.101 "nguid": "4EB0A5CC4B304C6BBE2F0F1C8F262028", 00:15:05.101 "uuid": "4eb0a5cc-4b30-4c6b-be2f-0f1c8f262028" 00:15:05.101 } 00:15:05.101 ] 00:15:05.101 }, 00:15:05.101 { 00:15:05.101 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:05.101 "subtype": "NVMe", 00:15:05.101 "listen_addresses": [ 00:15:05.101 { 00:15:05.101 "trtype": "VFIOUSER", 00:15:05.101 "adrfam": "IPv4", 00:15:05.101 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:05.101 "trsvcid": "0" 00:15:05.101 } 00:15:05.101 ], 00:15:05.101 "allow_any_host": true, 00:15:05.101 "hosts": [], 00:15:05.101 "serial_number": "SPDK2", 00:15:05.101 "model_number": "SPDK bdev Controller", 00:15:05.101 "max_namespaces": 32, 00:15:05.101 "min_cntlid": 1, 00:15:05.101 "max_cntlid": 65519, 00:15:05.101 "namespaces": [ 
00:15:05.101 { 00:15:05.101 "nsid": 1, 00:15:05.101 "bdev_name": "Malloc2", 00:15:05.101 "name": "Malloc2", 00:15:05.101 "nguid": "112EFFC6AD634181A559518DDE4B26C3", 00:15:05.101 "uuid": "112effc6-ad63-4181-a559-518dde4b26c3" 00:15:05.101 } 00:15:05.101 ] 00:15:05.101 } 00:15:05.101 ] 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2454436 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:05.101 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:05.359 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.359 [2024-07-25 07:20:37.749752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.359 Malloc4 00:15:05.359 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:05.617 [2024-07-25 07:20:38.102369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.617 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:05.875 Asynchronous Event Request test 00:15:05.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:05.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:05.875 Registering asynchronous event callbacks... 00:15:05.875 Starting namespace attribute notice tests for all controllers... 00:15:05.875 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:05.875 aer_cb - Changed Namespace 00:15:05.875 Cleaning up... 
00:15:05.875 [ 00:15:05.875 { 00:15:05.875 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.875 "subtype": "Discovery", 00:15:05.876 "listen_addresses": [], 00:15:05.876 "allow_any_host": true, 00:15:05.876 "hosts": [] 00:15:05.876 }, 00:15:05.876 { 00:15:05.876 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:05.876 "subtype": "NVMe", 00:15:05.876 "listen_addresses": [ 00:15:05.876 { 00:15:05.876 "trtype": "VFIOUSER", 00:15:05.876 "adrfam": "IPv4", 00:15:05.876 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:05.876 "trsvcid": "0" 00:15:05.876 } 00:15:05.876 ], 00:15:05.876 "allow_any_host": true, 00:15:05.876 "hosts": [], 00:15:05.876 "serial_number": "SPDK1", 00:15:05.876 "model_number": "SPDK bdev Controller", 00:15:05.876 "max_namespaces": 32, 00:15:05.876 "min_cntlid": 1, 00:15:05.876 "max_cntlid": 65519, 00:15:05.876 "namespaces": [ 00:15:05.876 { 00:15:05.876 "nsid": 1, 00:15:05.876 "bdev_name": "Malloc1", 00:15:05.876 "name": "Malloc1", 00:15:05.876 "nguid": "378EF4ECA9644B49BE465301131EAD1A", 00:15:05.876 "uuid": "378ef4ec-a964-4b49-be46-5301131ead1a" 00:15:05.876 }, 00:15:05.876 { 00:15:05.876 "nsid": 2, 00:15:05.876 "bdev_name": "Malloc3", 00:15:05.876 "name": "Malloc3", 00:15:05.876 "nguid": "4EB0A5CC4B304C6BBE2F0F1C8F262028", 00:15:05.876 "uuid": "4eb0a5cc-4b30-4c6b-be2f-0f1c8f262028" 00:15:05.876 } 00:15:05.876 ] 00:15:05.876 }, 00:15:05.876 { 00:15:05.876 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:05.876 "subtype": "NVMe", 00:15:05.876 "listen_addresses": [ 00:15:05.876 { 00:15:05.876 "trtype": "VFIOUSER", 00:15:05.876 "adrfam": "IPv4", 00:15:05.876 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:05.876 "trsvcid": "0" 00:15:05.876 } 00:15:05.876 ], 00:15:05.876 "allow_any_host": true, 00:15:05.876 "hosts": [], 00:15:05.876 "serial_number": "SPDK2", 00:15:05.876 "model_number": "SPDK bdev Controller", 00:15:05.876 "max_namespaces": 32, 00:15:05.876 "min_cntlid": 1, 00:15:05.876 "max_cntlid": 65519, 00:15:05.876 "namespaces": [ 
00:15:05.876 { 00:15:05.876 "nsid": 1, 00:15:05.876 "bdev_name": "Malloc2", 00:15:05.876 "name": "Malloc2", 00:15:05.876 "nguid": "112EFFC6AD634181A559518DDE4B26C3", 00:15:05.876 "uuid": "112effc6-ad63-4181-a559-518dde4b26c3" 00:15:05.876 }, 00:15:05.876 { 00:15:05.876 "nsid": 2, 00:15:05.876 "bdev_name": "Malloc4", 00:15:05.876 "name": "Malloc4", 00:15:05.876 "nguid": "D2313765B8144AA1A9494D93088C422E", 00:15:05.876 "uuid": "d2313765-b814-4aa1-a949-4d93088c422e" 00:15:05.876 } 00:15:05.876 ] 00:15:05.876 } 00:15:05.876 ] 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2454436 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2448839 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2448839 ']' 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2448839 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2448839 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2448839' 00:15:05.876 killing process with pid 2448839 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 2448839 00:15:05.876 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2448839 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2454578 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2454578' 00:15:06.443 Process pid: 2454578 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2454578 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2454578 ']' 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.443 
07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.443 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:06.443 [2024-07-25 07:20:38.809230] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:06.443 [2024-07-25 07:20:38.810224] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:15:06.443 [2024-07-25 07:20:38.810290] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.443 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.443 [2024-07-25 07:20:38.867166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.701 [2024-07-25 07:20:38.974503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.701 [2024-07-25 07:20:38.974551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.701 [2024-07-25 07:20:38.974566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.701 [2024-07-25 07:20:38.974578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.701 [2024-07-25 07:20:38.974589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.701 [2024-07-25 07:20:38.974667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.701 [2024-07-25 07:20:38.974718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.701 [2024-07-25 07:20:38.974745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.701 [2024-07-25 07:20:38.974748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.701 [2024-07-25 07:20:39.068549] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:06.701 [2024-07-25 07:20:39.068783] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:06.701 [2024-07-25 07:20:39.069079] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:06.701 [2024-07-25 07:20:39.069735] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:06.701 [2024-07-25 07:20:39.069978] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:06.701 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.701 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:06.701 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:07.632 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:07.890 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:07.891 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:07.891 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:07.891 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:07.891 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:08.148 Malloc1 00:15:08.148 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:08.407 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:08.664 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:08.921 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:08.921 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:08.921 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:09.179 Malloc2 00:15:09.179 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:09.786 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:09.786 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2454578 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2454578 ']' 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2454578 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.352 07:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2454578 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2454578' 00:15:10.352 killing process with pid 2454578 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2454578 00:15:10.352 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2454578 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:10.610 00:15:10.610 real 0m52.839s 00:15:10.610 user 3m28.153s 00:15:10.610 sys 0m4.544s 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:10.610 ************************************ 00:15:10.610 END TEST nvmf_vfio_user 00:15:10.610 ************************************ 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.610 07:20:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.610 ************************************ 00:15:10.610 START TEST nvmf_vfio_user_nvme_compliance 00:15:10.610 ************************************ 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:10.610 * Looking for test storage... 00:15:10.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.610 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.611 07:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.611 07:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2455169 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2455169' 00:15:10.611 Process pid: 2455169 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2455169 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2455169 ']' 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.611 07:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.611 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:10.611 [2024-07-25 07:20:43.116796] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:15:10.611 [2024-07-25 07:20:43.116900] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.869 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.869 [2024-07-25 07:20:43.177514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:10.869 [2024-07-25 07:20:43.284932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.869 [2024-07-25 07:20:43.284982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.869 [2024-07-25 07:20:43.284998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.869 [2024-07-25 07:20:43.285012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.869 [2024-07-25 07:20:43.285023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.869 [2024-07-25 07:20:43.285112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.869 [2024-07-25 07:20:43.285182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.869 [2024-07-25 07:20:43.285164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.869 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.870 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:10.870 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.241 07:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.241 malloc0 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:12.241 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:12.241 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.241 00:15:12.241 00:15:12.241 CUnit - A unit testing framework for C - Version 2.1-3 00:15:12.241 http://cunit.sourceforge.net/ 00:15:12.241 00:15:12.241 00:15:12.241 Suite: nvme_compliance 00:15:12.241 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 07:20:44.613755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.241 [2024-07-25 07:20:44.616216] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:12.241 [2024-07-25 07:20:44.616262] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:12.241 [2024-07-25 07:20:44.616276] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:12.241 [2024-07-25 07:20:44.617786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.241 passed 00:15:12.241 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 07:20:44.704415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.241 [2024-07-25 07:20:44.707437] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.241 passed 00:15:12.498 Test: admin_identify_ns ...[2024-07-25 07:20:44.794204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.498 [2024-07-25 07:20:44.851262] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:12.498 [2024-07-25 07:20:44.859263] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:12.498 [2024-07-25 
07:20:44.880383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.498 passed 00:15:12.498 Test: admin_get_features_mandatory_features ...[2024-07-25 07:20:44.963777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.498 [2024-07-25 07:20:44.968809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.498 passed 00:15:12.756 Test: admin_get_features_optional_features ...[2024-07-25 07:20:45.052348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.756 [2024-07-25 07:20:45.055364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.756 passed 00:15:12.756 Test: admin_set_features_number_of_queues ...[2024-07-25 07:20:45.139769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.756 [2024-07-25 07:20:45.244360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.756 passed 00:15:13.014 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 07:20:45.329425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.014 [2024-07-25 07:20:45.332449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.014 passed 00:15:13.014 Test: admin_get_log_page_with_lpo ...[2024-07-25 07:20:45.413679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.014 [2024-07-25 07:20:45.481284] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:13.014 [2024-07-25 07:20:45.494355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.014 passed 00:15:13.272 Test: fabric_property_get ...[2024-07-25 07:20:45.579124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.272 [2024-07-25 07:20:45.580419] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:13.272 [2024-07-25 07:20:45.582145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.272 passed 00:15:13.272 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 07:20:45.666764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.272 [2024-07-25 07:20:45.668076] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:13.272 [2024-07-25 07:20:45.669800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.272 passed 00:15:13.272 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 07:20:45.754792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.538 [2024-07-25 07:20:45.838253] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:13.538 [2024-07-25 07:20:45.854269] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:13.538 [2024-07-25 07:20:45.859343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.538 passed 00:15:13.539 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 07:20:45.942956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.539 [2024-07-25 07:20:45.944298] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:13.539 [2024-07-25 07:20:45.945980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.539 passed 00:15:13.539 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 07:20:46.031274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.796 [2024-07-25 07:20:46.104268] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:15:13.796 [2024-07-25 07:20:46.131252] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:13.796 [2024-07-25 07:20:46.136367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.796 passed 00:15:13.796 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 07:20:46.219954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.796 [2024-07-25 07:20:46.221297] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:13.796 [2024-07-25 07:20:46.221334] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:13.796 [2024-07-25 07:20:46.222982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.797 passed 00:15:13.797 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 07:20:46.304844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.054 [2024-07-25 07:20:46.397265] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:14.054 [2024-07-25 07:20:46.405266] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:14.054 [2024-07-25 07:20:46.413265] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:14.055 [2024-07-25 07:20:46.421266] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:14.055 [2024-07-25 07:20:46.450393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.055 passed 00:15:14.055 Test: admin_create_io_sq_verify_pc ...[2024-07-25 07:20:46.531009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.055 [2024-07-25 07:20:46.549266] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:14.055 
[2024-07-25 07:20:46.566362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.313 passed 00:15:14.313 Test: admin_create_io_qp_max_qps ...[2024-07-25 07:20:46.649919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.246 [2024-07-25 07:20:47.763260] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:15.811 [2024-07-25 07:20:48.143555] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.811 passed 00:15:15.811 Test: admin_create_io_sq_shared_cq ...[2024-07-25 07:20:48.227931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.069 [2024-07-25 07:20:48.367269] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:16.069 [2024-07-25 07:20:48.404341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.069 passed 00:15:16.069 00:15:16.069 Run Summary: Type Total Ran Passed Failed Inactive 00:15:16.069 suites 1 1 n/a 0 0 00:15:16.069 tests 18 18 18 0 0 00:15:16.069 asserts 360 360 360 0 n/a 00:15:16.069 00:15:16.069 Elapsed time = 1.569 seconds 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2455169 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2455169 ']' 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2455169 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.069 07:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2455169 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2455169' 00:15:16.069 killing process with pid 2455169 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2455169 00:15:16.069 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2455169 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:16.328 00:15:16.328 real 0m5.784s 00:15:16.328 user 0m16.188s 00:15:16.328 sys 0m0.538s 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.328 ************************************ 00:15:16.328 END TEST nvmf_vfio_user_nvme_compliance 00:15:16.328 ************************************ 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.328 ************************************ 00:15:16.328 START TEST nvmf_vfio_user_fuzz 00:15:16.328 ************************************ 00:15:16.328 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:16.586 * Looking for test storage... 00:15:16.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.586 07:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:16.586 07:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2455918 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2455918' 00:15:16.586 Process pid: 2455918 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2455918 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2455918 ']' 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.586 07:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.586 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:16.845 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.845 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:16.845 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:17.778 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:17.778 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.778 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.779 malloc0 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:17.779 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:49.837 Fuzzing completed. Shutting down the fuzz application 00:15:49.838 00:15:49.838 Dumping successful admin opcodes: 00:15:49.838 8, 9, 10, 24, 00:15:49.838 Dumping successful io opcodes: 00:15:49.838 0, 00:15:49.838 NS: 0x200003a1ef00 I/O qp, Total commands completed: 710801, total successful commands: 2769, random_seed: 2505534592 00:15:49.838 NS: 0x200003a1ef00 admin qp, Total commands completed: 90748, total successful commands: 729, random_seed: 3209973184 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2455918 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2455918 ']' 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2455918 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2455918 00:15:49.838 07:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2455918' 00:15:49.838 killing process with pid 2455918 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2455918 00:15:49.838 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2455918 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:49.838 00:15:49.838 real 0m32.335s 00:15:49.838 user 0m33.855s 00:15:49.838 sys 0m27.175s 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.838 ************************************ 00:15:49.838 END TEST nvmf_vfio_user_fuzz 00:15:49.838 ************************************ 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.838 ************************************ 00:15:49.838 START TEST nvmf_auth_target 00:15:49.838 ************************************ 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:49.838 * Looking for test storage... 00:15:49.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.838 07:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.838 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.839 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.839 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.839 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.839 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:15:49.839 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.839 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.806 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.807 07:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:50.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:50.807 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.807 07:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:50.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:50.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.807 07:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:50.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:15:50.807 00:15:50.807 --- 10.0.0.2 ping statistics --- 00:15:50.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.807 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:15:50.807 00:15:50.807 --- 10.0.0.1 ping statistics --- 00:15:50.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.807 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.807 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2461858 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2461858 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2461858 ']' 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.074 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2461884 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:51.332 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4772c75b49d720a2a38e185362914922903809bf44194b86 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vYP 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4772c75b49d720a2a38e185362914922903809bf44194b86 0 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4772c75b49d720a2a38e185362914922903809bf44194b86 0 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4772c75b49d720a2a38e185362914922903809bf44194b86 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vYP 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vYP 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.vYP 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=08a8392510933a0d20ddb2d3f4672b0235a9bc145e8ff8082e33f31bc2f34ad3 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.97X 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 08a8392510933a0d20ddb2d3f4672b0235a9bc145e8ff8082e33f31bc2f34ad3 3 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 08a8392510933a0d20ddb2d3f4672b0235a9bc145e8ff8082e33f31bc2f34ad3 3 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=08a8392510933a0d20ddb2d3f4672b0235a9bc145e8ff8082e33f31bc2f34ad3 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.97X 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.97X 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.97X 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c7d71cca55fd64adee7708e9408f1117 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9Wv 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c7d71cca55fd64adee7708e9408f1117 1 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
c7d71cca55fd64adee7708e9408f1117 1 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c7d71cca55fd64adee7708e9408f1117 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:51.333 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9Wv 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9Wv 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.9Wv 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b7b376ccaca963411c975caef700982ea8bd11a0891bb104 00:15:51.591 07:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Lf8 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b7b376ccaca963411c975caef700982ea8bd11a0891bb104 2 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b7b376ccaca963411c975caef700982ea8bd11a0891bb104 2 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b7b376ccaca963411c975caef700982ea8bd11a0891bb104 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Lf8 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Lf8 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Lf8 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b18f9f4a9b3eda3c6968c8c71ac230ca528835bee01de80b 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.67p 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b18f9f4a9b3eda3c6968c8c71ac230ca528835bee01de80b 2 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b18f9f4a9b3eda3c6968c8c71ac230ca528835bee01de80b 2 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b18f9f4a9b3eda3c6968c8c71ac230ca528835bee01de80b 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:51.591 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.67p 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.67p 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.67p 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8e1be8d842232331256173032239f640 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HdI 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8e1be8d842232331256173032239f640 1 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8e1be8d842232331256173032239f640 1 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8e1be8d842232331256173032239f640 00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:15:51.592 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HdI 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HdI 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.HdI 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=577f25b2af61f1b40cd1126343d384d566f69bff74fb7cfbf1024a473d5349e1 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.f0W 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 577f25b2af61f1b40cd1126343d384d566f69bff74fb7cfbf1024a473d5349e1 3 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 577f25b2af61f1b40cd1126343d384d566f69bff74fb7cfbf1024a473d5349e1 3 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=577f25b2af61f1b40cd1126343d384d566f69bff74fb7cfbf1024a473d5349e1 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.f0W 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.f0W 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.f0W 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2461858 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2461858 ']' 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.592 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2461884 /var/tmp/host.sock 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2461884 ']' 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:51.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.850 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vYP 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vYP 00:15:52.108 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vYP 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.97X ]] 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97X 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97X 00:15:52.365 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97X 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9Wv 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9Wv 00:15:52.622 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.9Wv 00:15:52.879 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.Lf8 ]] 00:15:52.879 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lf8 00:15:52.879 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.879 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.879 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.880 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lf8 00:15:52.880 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lf8 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.67p 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.67p 00:15:53.137 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.67p 00:15:53.394 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.HdI ]] 00:15:53.394 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HdI 00:15:53.394 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.394 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.395 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.395 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HdI 00:15:53.395 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HdI 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.f0W 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.f0W 00:15:53.652 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.f0W 00:15:53.910 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:15:53.910 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:53.910 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.910 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.910 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.910 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.167 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.425 00:15:54.425 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.425 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.425 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.682 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.682 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.682 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.682 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.682 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.682 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:54.682 { 00:15:54.682 "cntlid": 1, 00:15:54.683 "qid": 0, 00:15:54.683 "state": "enabled", 00:15:54.683 "thread": "nvmf_tgt_poll_group_000", 00:15:54.683 "listen_address": { 00:15:54.683 "trtype": "TCP", 00:15:54.683 "adrfam": "IPv4", 00:15:54.683 "traddr": "10.0.0.2", 00:15:54.683 "trsvcid": "4420" 00:15:54.683 }, 00:15:54.683 "peer_address": { 00:15:54.683 "trtype": "TCP", 00:15:54.683 "adrfam": "IPv4", 00:15:54.683 "traddr": "10.0.0.1", 00:15:54.683 "trsvcid": "60300" 00:15:54.683 }, 00:15:54.683 "auth": { 00:15:54.683 "state": "completed", 00:15:54.683 "digest": "sha256", 00:15:54.683 "dhgroup": "null" 00:15:54.683 } 00:15:54.683 } 00:15:54.683 ]' 00:15:54.683 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.941 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.199 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.131 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.389 07:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.389 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.647 00:15:56.647 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.647 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:56.647 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.905 { 00:15:56.905 "cntlid": 3, 00:15:56.905 "qid": 0, 00:15:56.905 "state": "enabled", 00:15:56.905 "thread": "nvmf_tgt_poll_group_000", 00:15:56.905 "listen_address": { 00:15:56.905 "trtype": "TCP", 00:15:56.905 "adrfam": "IPv4", 00:15:56.905 "traddr": "10.0.0.2", 00:15:56.905 "trsvcid": "4420" 00:15:56.905 }, 00:15:56.905 "peer_address": { 00:15:56.905 "trtype": "TCP", 00:15:56.905 "adrfam": "IPv4", 00:15:56.905 "traddr": "10.0.0.1", 00:15:56.905 "trsvcid": "60324" 00:15:56.905 }, 00:15:56.905 "auth": { 00:15:56.905 "state": "completed", 00:15:56.905 "digest": "sha256", 00:15:56.905 "dhgroup": "null" 00:15:56.905 } 00:15:56.905 } 00:15:56.905 ]' 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.905 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.163 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:57.163 07:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.163 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.163 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.163 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.421 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.611 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.611 
07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.869 00:15:58.869 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.869 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.869 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.127 { 00:15:59.127 "cntlid": 5, 00:15:59.127 "qid": 0, 00:15:59.127 "state": "enabled", 00:15:59.127 "thread": "nvmf_tgt_poll_group_000", 00:15:59.127 "listen_address": { 00:15:59.127 "trtype": "TCP", 00:15:59.127 "adrfam": "IPv4", 00:15:59.127 "traddr": "10.0.0.2", 00:15:59.127 "trsvcid": "4420" 00:15:59.127 }, 00:15:59.127 "peer_address": { 00:15:59.127 "trtype": "TCP", 00:15:59.127 "adrfam": "IPv4", 00:15:59.127 "traddr": 
"10.0.0.1", 00:15:59.127 "trsvcid": "44484" 00:15:59.127 }, 00:15:59.127 "auth": { 00:15:59.127 "state": "completed", 00:15:59.127 "digest": "sha256", 00:15:59.127 "dhgroup": "null" 00:15:59.127 } 00:15:59.127 } 00:15:59.127 ]' 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:59.127 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.385 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.385 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.385 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.643 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.577 07:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.577 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.834 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:00.834 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.835 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.091 00:16:01.091 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.091 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.091 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.348 07:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.348 { 00:16:01.348 "cntlid": 7, 00:16:01.348 "qid": 0, 00:16:01.348 "state": "enabled", 00:16:01.348 "thread": "nvmf_tgt_poll_group_000", 00:16:01.348 "listen_address": { 00:16:01.348 "trtype": "TCP", 00:16:01.348 "adrfam": "IPv4", 00:16:01.348 "traddr": "10.0.0.2", 00:16:01.348 "trsvcid": "4420" 00:16:01.348 }, 00:16:01.348 "peer_address": { 00:16:01.348 "trtype": "TCP", 00:16:01.348 "adrfam": "IPv4", 00:16:01.348 "traddr": "10.0.0.1", 00:16:01.348 "trsvcid": "44518" 00:16:01.348 }, 00:16:01.348 "auth": { 00:16:01.348 "state": "completed", 00:16:01.348 "digest": "sha256", 00:16:01.348 "dhgroup": "null" 00:16:01.348 } 00:16:01.348 } 00:16:01.348 ]' 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.348 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.605 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.978 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.236 00:16:03.236 07:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.236 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.236 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.494 { 00:16:03.494 "cntlid": 9, 00:16:03.494 "qid": 0, 00:16:03.494 "state": "enabled", 00:16:03.494 "thread": "nvmf_tgt_poll_group_000", 00:16:03.494 "listen_address": { 00:16:03.494 "trtype": "TCP", 00:16:03.494 "adrfam": "IPv4", 00:16:03.494 "traddr": "10.0.0.2", 00:16:03.494 "trsvcid": "4420" 00:16:03.494 }, 00:16:03.494 "peer_address": { 00:16:03.494 "trtype": "TCP", 00:16:03.494 "adrfam": "IPv4", 00:16:03.494 "traddr": "10.0.0.1", 00:16:03.494 "trsvcid": "44556" 00:16:03.494 }, 00:16:03.494 "auth": { 00:16:03.494 "state": "completed", 00:16:03.494 "digest": "sha256", 00:16:03.494 "dhgroup": "ffdhe2048" 00:16:03.494 } 00:16:03.494 } 00:16:03.494 ]' 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.494 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.765 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.765 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.765 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.765 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.765 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.050 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.982 07:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.982 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.240 07:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.240 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.498 00:16:05.498 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.498 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.498 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.756 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.756 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.756 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.756 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.756 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.756 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.756 { 
00:16:05.756 "cntlid": 11, 00:16:05.756 "qid": 0, 00:16:05.756 "state": "enabled", 00:16:05.756 "thread": "nvmf_tgt_poll_group_000", 00:16:05.756 "listen_address": { 00:16:05.756 "trtype": "TCP", 00:16:05.756 "adrfam": "IPv4", 00:16:05.756 "traddr": "10.0.0.2", 00:16:05.756 "trsvcid": "4420" 00:16:05.756 }, 00:16:05.756 "peer_address": { 00:16:05.756 "trtype": "TCP", 00:16:05.756 "adrfam": "IPv4", 00:16:05.756 "traddr": "10.0.0.1", 00:16:05.756 "trsvcid": "44584" 00:16:05.756 }, 00:16:05.756 "auth": { 00:16:05.756 "state": "completed", 00:16:05.757 "digest": "sha256", 00:16:05.757 "dhgroup": "ffdhe2048" 00:16:05.757 } 00:16:05.757 } 00:16:05.757 ]' 00:16:05.757 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.757 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.757 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.757 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.757 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.014 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.014 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.014 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.014 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.388 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.645 00:16:07.645 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.645 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.645 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.903 { 00:16:07.903 "cntlid": 13, 00:16:07.903 "qid": 0, 00:16:07.903 "state": "enabled", 00:16:07.903 "thread": "nvmf_tgt_poll_group_000", 00:16:07.903 "listen_address": { 00:16:07.903 "trtype": "TCP", 00:16:07.903 "adrfam": "IPv4", 00:16:07.903 "traddr": "10.0.0.2", 00:16:07.903 "trsvcid": "4420" 00:16:07.903 }, 00:16:07.903 "peer_address": { 00:16:07.903 "trtype": "TCP", 00:16:07.903 "adrfam": "IPv4", 00:16:07.903 "traddr": "10.0.0.1", 00:16:07.903 "trsvcid": "59462" 00:16:07.903 }, 00:16:07.903 "auth": { 00:16:07.903 "state": "completed", 00:16:07.903 "digest": "sha256", 00:16:07.903 "dhgroup": "ffdhe2048" 00:16:07.903 } 00:16:07.903 } 00:16:07.903 ]' 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.903 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.161 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.161 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.161 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.161 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.161 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.418 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.349 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.607 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.607 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.607 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.864 00:16:09.864 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.864 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.864 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.122 { 00:16:10.122 "cntlid": 15, 00:16:10.122 "qid": 0, 00:16:10.122 "state": "enabled", 00:16:10.122 "thread": "nvmf_tgt_poll_group_000", 00:16:10.122 "listen_address": { 00:16:10.122 "trtype": "TCP", 00:16:10.122 "adrfam": "IPv4", 00:16:10.122 "traddr": "10.0.0.2", 00:16:10.122 "trsvcid": "4420" 00:16:10.122 }, 00:16:10.122 "peer_address": { 00:16:10.122 "trtype": "TCP", 00:16:10.122 "adrfam": "IPv4", 00:16:10.122 "traddr": "10.0.0.1", 00:16:10.122 "trsvcid": "59498" 00:16:10.122 }, 00:16:10.122 "auth": { 
00:16:10.122 "state": "completed", 00:16:10.122 "digest": "sha256", 00:16:10.122 "dhgroup": "ffdhe2048" 00:16:10.122 } 00:16:10.122 } 00:16:10.122 ]' 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.122 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.380 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.380 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.380 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.380 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.380 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.637 07:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.570 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.827 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.084 00:16:12.084 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.084 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.084 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.341 { 00:16:12.341 "cntlid": 17, 00:16:12.341 "qid": 0, 00:16:12.341 "state": "enabled", 00:16:12.341 "thread": "nvmf_tgt_poll_group_000", 00:16:12.341 "listen_address": { 00:16:12.341 "trtype": "TCP", 00:16:12.341 "adrfam": "IPv4", 00:16:12.341 "traddr": "10.0.0.2", 00:16:12.341 "trsvcid": "4420" 00:16:12.341 }, 00:16:12.341 "peer_address": { 00:16:12.341 "trtype": "TCP", 00:16:12.341 "adrfam": "IPv4", 00:16:12.341 "traddr": "10.0.0.1", 00:16:12.341 "trsvcid": "59520" 00:16:12.341 }, 00:16:12.341 "auth": { 00:16:12.341 "state": "completed", 00:16:12.341 "digest": "sha256", 00:16:12.341 "dhgroup": "ffdhe3072" 00:16:12.341 } 00:16:12.341 } 00:16:12.341 ]' 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.341 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.598 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.598 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.598 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.855 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:13.789 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.047 07:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.047 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:14.305 00:16:14.305 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.305 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.305 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.563 { 00:16:14.563 "cntlid": 19, 00:16:14.563 "qid": 0, 00:16:14.563 "state": "enabled", 00:16:14.563 "thread": "nvmf_tgt_poll_group_000", 00:16:14.563 "listen_address": { 00:16:14.563 "trtype": "TCP", 00:16:14.563 "adrfam": "IPv4", 00:16:14.563 "traddr": "10.0.0.2", 00:16:14.563 "trsvcid": "4420" 00:16:14.563 }, 00:16:14.563 "peer_address": { 00:16:14.563 "trtype": "TCP", 00:16:14.563 "adrfam": "IPv4", 00:16:14.563 "traddr": "10.0.0.1", 00:16:14.563 "trsvcid": "59542" 00:16:14.563 }, 00:16:14.563 "auth": { 00:16:14.563 "state": "completed", 00:16:14.563 "digest": "sha256", 00:16:14.563 "dhgroup": "ffdhe3072" 00:16:14.563 } 00:16:14.563 } 00:16:14.563 ]' 00:16:14.563 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.563 
07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.563 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.563 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.563 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.563 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.563 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.563 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.821 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.193 07:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 07:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.193 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.451 00:16:16.451 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.451 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.451 07:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.708 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.708 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.708 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.708 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.708 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.708 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.708 { 
00:16:16.708 "cntlid": 21, 00:16:16.708 "qid": 0, 00:16:16.708 "state": "enabled", 00:16:16.708 "thread": "nvmf_tgt_poll_group_000", 00:16:16.708 "listen_address": { 00:16:16.708 "trtype": "TCP", 00:16:16.708 "adrfam": "IPv4", 00:16:16.708 "traddr": "10.0.0.2", 00:16:16.709 "trsvcid": "4420" 00:16:16.709 }, 00:16:16.709 "peer_address": { 00:16:16.709 "trtype": "TCP", 00:16:16.709 "adrfam": "IPv4", 00:16:16.709 "traddr": "10.0.0.1", 00:16:16.709 "trsvcid": "59574" 00:16:16.709 }, 00:16:16.709 "auth": { 00:16:16.709 "state": "completed", 00:16:16.709 "digest": "sha256", 00:16:16.709 "dhgroup": "ffdhe3072" 00:16:16.709 } 00:16:16.709 } 00:16:16.709 ]' 00:16:16.709 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.966 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.223 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.218 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.475 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.732 00:16:18.732 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.732 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.732 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.989 07:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.989 { 00:16:18.989 "cntlid": 23, 00:16:18.989 "qid": 0, 00:16:18.989 "state": "enabled", 00:16:18.989 "thread": "nvmf_tgt_poll_group_000", 00:16:18.989 "listen_address": { 00:16:18.989 "trtype": "TCP", 00:16:18.989 "adrfam": "IPv4", 00:16:18.989 "traddr": "10.0.0.2", 00:16:18.989 "trsvcid": "4420" 00:16:18.989 }, 00:16:18.989 "peer_address": { 00:16:18.989 "trtype": "TCP", 00:16:18.989 "adrfam": "IPv4", 00:16:18.989 "traddr": "10.0.0.1", 00:16:18.989 "trsvcid": "55304" 00:16:18.989 }, 00:16:18.989 "auth": { 00:16:18.989 "state": "completed", 00:16:18.989 "digest": "sha256", 00:16:18.989 "dhgroup": "ffdhe3072" 00:16:18.989 } 00:16:18.989 } 00:16:18.989 ]' 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.989 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.247 07:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.247 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.247 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.504 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.435 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.693 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.693 07:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.951 00:16:20.951 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.951 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.951 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.208 { 00:16:21.208 "cntlid": 25, 00:16:21.208 "qid": 0, 00:16:21.208 "state": "enabled", 00:16:21.208 "thread": "nvmf_tgt_poll_group_000", 00:16:21.208 "listen_address": { 00:16:21.208 "trtype": "TCP", 00:16:21.208 "adrfam": "IPv4", 00:16:21.208 "traddr": "10.0.0.2", 00:16:21.208 "trsvcid": "4420" 00:16:21.208 }, 00:16:21.208 "peer_address": { 00:16:21.208 "trtype": "TCP", 00:16:21.208 "adrfam": "IPv4", 00:16:21.208 "traddr": "10.0.0.1", 
00:16:21.208 "trsvcid": "55326" 00:16:21.208 }, 00:16:21.208 "auth": { 00:16:21.208 "state": "completed", 00:16:21.208 "digest": "sha256", 00:16:21.208 "dhgroup": "ffdhe4096" 00:16:21.208 } 00:16:21.208 } 00:16:21.208 ]' 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.208 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.465 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.465 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.465 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.465 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.465 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.722 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:16:22.655 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:22.655 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.656 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.656 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.656 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.656 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.656 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.656 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.914 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.171 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:23.430 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.688 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.688 { 00:16:23.688 "cntlid": 27, 00:16:23.688 "qid": 0, 00:16:23.688 "state": "enabled", 00:16:23.688 "thread": "nvmf_tgt_poll_group_000", 00:16:23.688 "listen_address": { 00:16:23.688 "trtype": "TCP", 00:16:23.688 "adrfam": "IPv4", 00:16:23.688 "traddr": "10.0.0.2", 00:16:23.688 "trsvcid": "4420" 00:16:23.688 }, 00:16:23.688 "peer_address": { 00:16:23.688 "trtype": "TCP", 00:16:23.688 "adrfam": "IPv4", 00:16:23.688 "traddr": "10.0.0.1", 00:16:23.688 "trsvcid": "55352" 00:16:23.688 }, 00:16:23.688 "auth": { 00:16:23.688 "state": "completed", 00:16:23.688 "digest": "sha256", 00:16:23.688 "dhgroup": "ffdhe4096" 00:16:23.688 } 00:16:23.688 } 00:16:23.688 ]' 00:16:23.688 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.688 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.945 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.878 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.135 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.700 00:16:25.700 07:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.700 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.700 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.958 { 00:16:25.958 "cntlid": 29, 00:16:25.958 "qid": 0, 00:16:25.958 "state": "enabled", 00:16:25.958 "thread": "nvmf_tgt_poll_group_000", 00:16:25.958 "listen_address": { 00:16:25.958 "trtype": "TCP", 00:16:25.958 "adrfam": "IPv4", 00:16:25.958 "traddr": "10.0.0.2", 00:16:25.958 "trsvcid": "4420" 00:16:25.958 }, 00:16:25.958 "peer_address": { 00:16:25.958 "trtype": "TCP", 00:16:25.958 "adrfam": "IPv4", 00:16:25.958 "traddr": "10.0.0.1", 00:16:25.958 "trsvcid": "55366" 00:16:25.958 }, 00:16:25.958 "auth": { 00:16:25.958 "state": "completed", 00:16:25.958 "digest": "sha256", 00:16:25.958 "dhgroup": "ffdhe4096" 00:16:25.958 } 00:16:25.958 } 00:16:25.958 ]' 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.958 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.216 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:27.149 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.149 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.149 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.149 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.149 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.149 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.150 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.150 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.408 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.973 00:16:27.973 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.973 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.973 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.231 { 00:16:28.231 "cntlid": 31, 00:16:28.231 "qid": 0, 00:16:28.231 "state": "enabled", 00:16:28.231 "thread": "nvmf_tgt_poll_group_000", 
00:16:28.231 "listen_address": { 00:16:28.231 "trtype": "TCP", 00:16:28.231 "adrfam": "IPv4", 00:16:28.231 "traddr": "10.0.0.2", 00:16:28.231 "trsvcid": "4420" 00:16:28.231 }, 00:16:28.231 "peer_address": { 00:16:28.231 "trtype": "TCP", 00:16:28.231 "adrfam": "IPv4", 00:16:28.231 "traddr": "10.0.0.1", 00:16:28.231 "trsvcid": "43980" 00:16:28.231 }, 00:16:28.231 "auth": { 00:16:28.231 "state": "completed", 00:16:28.231 "digest": "sha256", 00:16:28.231 "dhgroup": "ffdhe4096" 00:16:28.231 } 00:16:28.231 } 00:16:28.231 ]' 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.231 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.489 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 
00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.420 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.678 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.241 00:16:30.241 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.241 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.241 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.499 07:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.499 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.499 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.499 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.499 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.499 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.499 { 00:16:30.499 "cntlid": 33, 00:16:30.499 "qid": 0, 00:16:30.499 "state": "enabled", 00:16:30.499 "thread": "nvmf_tgt_poll_group_000", 00:16:30.499 "listen_address": { 00:16:30.499 "trtype": "TCP", 00:16:30.499 "adrfam": "IPv4", 00:16:30.499 "traddr": "10.0.0.2", 00:16:30.499 "trsvcid": "4420" 00:16:30.499 }, 00:16:30.499 "peer_address": { 00:16:30.499 "trtype": "TCP", 00:16:30.499 "adrfam": "IPv4", 00:16:30.499 "traddr": "10.0.0.1", 00:16:30.499 "trsvcid": "44006" 00:16:30.499 }, 00:16:30.499 "auth": { 00:16:30.499 "state": "completed", 00:16:30.499 "digest": "sha256", 00:16:30.499 "dhgroup": "ffdhe6144" 00:16:30.499 } 00:16:30.499 } 00:16:30.499 ]' 00:16:30.499 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.756 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.756 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.756 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.756 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.756 07:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.756 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.756 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.012 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.003 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:16:32.004 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.262 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.828 00:16:32.828 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.828 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.828 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.086 { 00:16:33.086 "cntlid": 35, 00:16:33.086 "qid": 0, 00:16:33.086 "state": "enabled", 00:16:33.086 "thread": "nvmf_tgt_poll_group_000", 00:16:33.086 "listen_address": { 00:16:33.086 "trtype": "TCP", 00:16:33.086 "adrfam": "IPv4", 00:16:33.086 "traddr": "10.0.0.2", 00:16:33.086 "trsvcid": "4420" 00:16:33.086 }, 00:16:33.086 "peer_address": { 00:16:33.086 "trtype": "TCP", 00:16:33.086 "adrfam": "IPv4", 00:16:33.086 "traddr": "10.0.0.1", 00:16:33.086 "trsvcid": "44020" 00:16:33.086 
}, 00:16:33.086 "auth": { 00:16:33.086 "state": "completed", 00:16:33.086 "digest": "sha256", 00:16:33.086 "dhgroup": "ffdhe6144" 00:16:33.086 } 00:16:33.086 } 00:16:33.086 ]' 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.086 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.344 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.716 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.716 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.281 00:16:35.281 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.281 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.281 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.538 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.538 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.538 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.539 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:35.539 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.539 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.539 { 00:16:35.539 "cntlid": 37, 00:16:35.539 "qid": 0, 00:16:35.539 "state": "enabled", 00:16:35.539 "thread": "nvmf_tgt_poll_group_000", 00:16:35.539 "listen_address": { 00:16:35.539 "trtype": "TCP", 00:16:35.539 "adrfam": "IPv4", 00:16:35.539 "traddr": "10.0.0.2", 00:16:35.539 "trsvcid": "4420" 00:16:35.539 }, 00:16:35.539 "peer_address": { 00:16:35.539 "trtype": "TCP", 00:16:35.539 "adrfam": "IPv4", 00:16:35.539 "traddr": "10.0.0.1", 00:16:35.539 "trsvcid": "44052" 00:16:35.539 }, 00:16:35.539 "auth": { 00:16:35.539 "state": "completed", 00:16:35.539 "digest": "sha256", 00:16:35.539 "dhgroup": "ffdhe6144" 00:16:35.539 } 00:16:35.539 } 00:16:35.539 ]' 00:16:35.539 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.539 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.539 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.539 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.539 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.796 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.796 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.796 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:36.054 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:36.986 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.987 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:37.244 07:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.244 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.810 00:16:37.810 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.810 07:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.810 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.067 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.067 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.067 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.067 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.067 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.067 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.067 { 00:16:38.067 "cntlid": 39, 00:16:38.067 "qid": 0, 00:16:38.067 "state": "enabled", 00:16:38.067 "thread": "nvmf_tgt_poll_group_000", 00:16:38.067 "listen_address": { 00:16:38.067 "trtype": "TCP", 00:16:38.067 "adrfam": "IPv4", 00:16:38.067 "traddr": "10.0.0.2", 00:16:38.067 "trsvcid": "4420" 00:16:38.067 }, 00:16:38.067 "peer_address": { 00:16:38.067 "trtype": "TCP", 00:16:38.068 "adrfam": "IPv4", 00:16:38.068 "traddr": "10.0.0.1", 00:16:38.068 "trsvcid": "46608" 00:16:38.068 }, 00:16:38.068 "auth": { 00:16:38.068 "state": "completed", 00:16:38.068 "digest": "sha256", 00:16:38.068 "dhgroup": "ffdhe6144" 00:16:38.068 } 00:16:38.068 } 00:16:38.068 ]' 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.068 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.325 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.698 07:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.698 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.698 07:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.698 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.632 00:16:40.632 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.632 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.632 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.890 { 00:16:40.890 "cntlid": 41, 00:16:40.890 "qid": 0, 00:16:40.890 "state": "enabled", 00:16:40.890 "thread": 
"nvmf_tgt_poll_group_000", 00:16:40.890 "listen_address": { 00:16:40.890 "trtype": "TCP", 00:16:40.890 "adrfam": "IPv4", 00:16:40.890 "traddr": "10.0.0.2", 00:16:40.890 "trsvcid": "4420" 00:16:40.890 }, 00:16:40.890 "peer_address": { 00:16:40.890 "trtype": "TCP", 00:16:40.890 "adrfam": "IPv4", 00:16:40.890 "traddr": "10.0.0.1", 00:16:40.890 "trsvcid": "46632" 00:16:40.890 }, 00:16:40.890 "auth": { 00:16:40.890 "state": "completed", 00:16:40.890 "digest": "sha256", 00:16:40.890 "dhgroup": "ffdhe8192" 00:16:40.890 } 00:16:40.890 } 00:16:40.890 ]' 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.890 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.147 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.147 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.147 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.147 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.147 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.405 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.338 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.595 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.527 00:16:43.527 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.527 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.527 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.785 { 00:16:43.785 "cntlid": 43, 00:16:43.785 "qid": 0, 00:16:43.785 "state": "enabled", 00:16:43.785 "thread": "nvmf_tgt_poll_group_000", 00:16:43.785 "listen_address": { 00:16:43.785 "trtype": "TCP", 00:16:43.785 "adrfam": "IPv4", 00:16:43.785 "traddr": "10.0.0.2", 00:16:43.785 "trsvcid": "4420" 00:16:43.785 }, 00:16:43.785 "peer_address": { 00:16:43.785 "trtype": "TCP", 00:16:43.785 "adrfam": "IPv4", 00:16:43.785 "traddr": "10.0.0.1", 00:16:43.785 "trsvcid": "46654" 00:16:43.785 }, 00:16:43.785 "auth": { 00:16:43.785 "state": "completed", 00:16:43.785 "digest": "sha256", 00:16:43.785 "dhgroup": "ffdhe8192" 00:16:43.785 } 00:16:43.785 } 00:16:43.785 ]' 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.785 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.043 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.974 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.560 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.560 07:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.511 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.511 { 00:16:46.511 "cntlid": 45, 00:16:46.511 "qid": 0, 00:16:46.511 "state": "enabled", 00:16:46.511 "thread": "nvmf_tgt_poll_group_000", 00:16:46.511 "listen_address": { 00:16:46.511 "trtype": "TCP", 00:16:46.511 "adrfam": "IPv4", 00:16:46.511 "traddr": "10.0.0.2", 00:16:46.511 "trsvcid": "4420" 00:16:46.511 }, 00:16:46.511 "peer_address": { 00:16:46.511 "trtype": "TCP", 00:16:46.511 "adrfam": "IPv4", 00:16:46.511 "traddr": "10.0.0.1", 
00:16:46.511 "trsvcid": "46690" 00:16:46.511 }, 00:16:46.511 "auth": { 00:16:46.511 "state": "completed", 00:16:46.511 "digest": "sha256", 00:16:46.511 "dhgroup": "ffdhe8192" 00:16:46.511 } 00:16:46.511 } 00:16:46.511 ]' 00:16:46.511 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.511 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.511 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.769 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.769 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.769 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.769 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.769 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.027 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.961 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.961 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.219 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.151 00:16:49.151 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.151 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.151 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.409 07:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.409 { 00:16:49.409 "cntlid": 47, 00:16:49.409 "qid": 0, 00:16:49.409 "state": "enabled", 00:16:49.409 "thread": "nvmf_tgt_poll_group_000", 00:16:49.409 "listen_address": { 00:16:49.409 "trtype": "TCP", 00:16:49.409 "adrfam": "IPv4", 00:16:49.409 "traddr": "10.0.0.2", 00:16:49.409 "trsvcid": "4420" 00:16:49.409 }, 00:16:49.409 "peer_address": { 00:16:49.409 "trtype": "TCP", 00:16:49.409 "adrfam": "IPv4", 00:16:49.409 "traddr": "10.0.0.1", 00:16:49.409 "trsvcid": "58828" 00:16:49.409 }, 00:16:49.409 "auth": { 00:16:49.409 "state": "completed", 00:16:49.409 "digest": "sha256", 00:16:49.409 "dhgroup": "ffdhe8192" 00:16:49.409 } 00:16:49.409 } 00:16:49.409 ]' 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.409 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.410 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.410 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.410 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.410 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.410 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.666 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.600 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.165 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.422 00:16:51.422 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.422 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.422 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.680 { 00:16:51.680 "cntlid": 49, 00:16:51.680 "qid": 0, 00:16:51.680 "state": "enabled", 00:16:51.680 "thread": "nvmf_tgt_poll_group_000", 00:16:51.680 "listen_address": { 00:16:51.680 "trtype": "TCP", 00:16:51.680 "adrfam": "IPv4", 00:16:51.680 "traddr": "10.0.0.2", 00:16:51.680 "trsvcid": "4420" 00:16:51.680 }, 00:16:51.680 "peer_address": { 00:16:51.680 "trtype": "TCP", 00:16:51.680 "adrfam": "IPv4", 00:16:51.680 "traddr": "10.0.0.1", 00:16:51.680 "trsvcid": "58848" 00:16:51.680 }, 00:16:51.680 "auth": { 00:16:51.680 "state": "completed", 00:16:51.680 "digest": "sha384", 00:16:51.680 "dhgroup": "null" 00:16:51.680 } 00:16:51.680 } 00:16:51.680 ]' 00:16:51.680 
07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.680 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.938 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.311 
07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.311 07:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.311 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.569 00:16:53.569 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.569 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.569 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.827 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.827 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.827 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.827 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.085 { 00:16:54.085 "cntlid": 51, 00:16:54.085 "qid": 0, 00:16:54.085 "state": "enabled", 00:16:54.085 "thread": "nvmf_tgt_poll_group_000", 00:16:54.085 "listen_address": { 00:16:54.085 "trtype": "TCP", 00:16:54.085 "adrfam": "IPv4", 00:16:54.085 "traddr": "10.0.0.2", 00:16:54.085 "trsvcid": "4420" 00:16:54.085 }, 00:16:54.085 "peer_address": { 00:16:54.085 "trtype": "TCP", 00:16:54.085 "adrfam": "IPv4", 00:16:54.085 "traddr": "10.0.0.1", 00:16:54.085 "trsvcid": "58888" 00:16:54.085 }, 00:16:54.085 "auth": { 00:16:54.085 "state": "completed", 00:16:54.085 "digest": "sha384", 00:16:54.085 "dhgroup": "null" 00:16:54.085 } 00:16:54.085 } 00:16:54.085 ]' 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.085 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.342 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.275 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.533 07:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.533 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.098 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.098 { 00:16:56.098 "cntlid": 53, 00:16:56.098 "qid": 0, 00:16:56.098 "state": "enabled", 00:16:56.098 "thread": "nvmf_tgt_poll_group_000", 00:16:56.098 "listen_address": { 00:16:56.098 "trtype": "TCP", 00:16:56.098 "adrfam": "IPv4", 00:16:56.098 "traddr": "10.0.0.2", 00:16:56.098 "trsvcid": "4420" 00:16:56.098 }, 00:16:56.098 "peer_address": { 00:16:56.098 "trtype": "TCP", 00:16:56.098 "adrfam": "IPv4", 00:16:56.098 "traddr": "10.0.0.1", 00:16:56.098 "trsvcid": "58926" 00:16:56.098 }, 00:16:56.098 "auth": { 00:16:56.098 "state": "completed", 00:16:56.098 "digest": "sha384", 00:16:56.098 "dhgroup": "null" 00:16:56.098 } 00:16:56.098 } 00:16:56.098 ]' 00:16:56.098 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.356 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.356 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.356 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:56.356 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.356 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.356 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.356 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.613 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:16:57.547 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.547 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.547 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.547 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.547 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.547 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.547 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.547 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.804 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:57.805 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.805 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.805 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.805 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.805 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.063 00:16:58.063 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.063 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.063 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.320 { 00:16:58.320 "cntlid": 55, 00:16:58.320 "qid": 0, 00:16:58.320 "state": "enabled", 00:16:58.320 "thread": "nvmf_tgt_poll_group_000", 00:16:58.320 "listen_address": { 00:16:58.320 "trtype": "TCP", 00:16:58.320 "adrfam": "IPv4", 00:16:58.320 "traddr": "10.0.0.2", 00:16:58.320 "trsvcid": "4420" 00:16:58.320 }, 00:16:58.320 "peer_address": { 00:16:58.320 "trtype": "TCP", 00:16:58.320 "adrfam": "IPv4", 00:16:58.320 "traddr": "10.0.0.1", 00:16:58.320 "trsvcid": "58378" 00:16:58.320 }, 00:16:58.320 "auth": { 
00:16:58.320 "state": "completed", 00:16:58.320 "digest": "sha384", 00:16:58.320 "dhgroup": "null" 00:16:58.320 } 00:16:58.320 } 00:16:58.320 ]' 00:16:58.320 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.578 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.836 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.819 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.820 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.820 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.077 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.078 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.335 00:17:00.335 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.335 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.335 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.593 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.593 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.593 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:00.593 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.593 { 00:17:00.593 "cntlid": 57, 00:17:00.593 "qid": 0, 00:17:00.593 "state": "enabled", 00:17:00.593 "thread": "nvmf_tgt_poll_group_000", 00:17:00.593 "listen_address": { 00:17:00.593 "trtype": "TCP", 00:17:00.593 "adrfam": "IPv4", 00:17:00.593 "traddr": "10.0.0.2", 00:17:00.593 "trsvcid": "4420" 00:17:00.593 }, 00:17:00.593 "peer_address": { 00:17:00.593 "trtype": "TCP", 00:17:00.593 "adrfam": "IPv4", 00:17:00.593 "traddr": "10.0.0.1", 00:17:00.593 "trsvcid": "58412" 00:17:00.593 }, 00:17:00.593 "auth": { 00:17:00.593 "state": "completed", 00:17:00.593 "digest": "sha384", 00:17:00.593 "dhgroup": "ffdhe2048" 00:17:00.593 } 00:17:00.593 } 00:17:00.593 ]' 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.593 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.851 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:01.783 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.783 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.783 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.783 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.040 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.040 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.041 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.041 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.298 07:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.298 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.299 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:02.557 00:17:02.557 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.557 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.557 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.815 { 00:17:02.815 "cntlid": 59, 00:17:02.815 "qid": 0, 00:17:02.815 "state": "enabled", 00:17:02.815 "thread": "nvmf_tgt_poll_group_000", 00:17:02.815 "listen_address": { 00:17:02.815 "trtype": "TCP", 00:17:02.815 "adrfam": "IPv4", 00:17:02.815 "traddr": "10.0.0.2", 00:17:02.815 "trsvcid": "4420" 00:17:02.815 }, 00:17:02.815 "peer_address": { 00:17:02.815 "trtype": "TCP", 00:17:02.815 "adrfam": "IPv4", 00:17:02.815 "traddr": "10.0.0.1", 00:17:02.815 "trsvcid": "58440" 00:17:02.815 }, 00:17:02.815 "auth": { 00:17:02.815 "state": "completed", 00:17:02.815 "digest": "sha384", 00:17:02.815 "dhgroup": "ffdhe2048" 00:17:02.815 } 00:17:02.815 } 00:17:02.815 ]' 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.815 
07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.815 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.072 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.006 07:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.006 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.264 07:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.264 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.830 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.830 { 
00:17:04.830 "cntlid": 61, 00:17:04.830 "qid": 0, 00:17:04.830 "state": "enabled", 00:17:04.830 "thread": "nvmf_tgt_poll_group_000", 00:17:04.830 "listen_address": { 00:17:04.830 "trtype": "TCP", 00:17:04.830 "adrfam": "IPv4", 00:17:04.830 "traddr": "10.0.0.2", 00:17:04.830 "trsvcid": "4420" 00:17:04.830 }, 00:17:04.830 "peer_address": { 00:17:04.830 "trtype": "TCP", 00:17:04.830 "adrfam": "IPv4", 00:17:04.830 "traddr": "10.0.0.1", 00:17:04.830 "trsvcid": "58462" 00:17:04.830 }, 00:17:04.830 "auth": { 00:17:04.830 "state": "completed", 00:17:04.830 "digest": "sha384", 00:17:04.830 "dhgroup": "ffdhe2048" 00:17:04.830 } 00:17:04.830 } 00:17:04.830 ]' 00:17:04.830 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.088 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.346 07:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.279 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.536 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.537 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.537 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.794 00:17:06.794 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.794 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.794 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.052 07:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.053 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.053 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.053 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.053 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.053 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.053 { 00:17:07.053 "cntlid": 63, 00:17:07.053 "qid": 0, 00:17:07.053 "state": "enabled", 00:17:07.053 "thread": "nvmf_tgt_poll_group_000", 00:17:07.053 "listen_address": { 00:17:07.053 "trtype": "TCP", 00:17:07.053 "adrfam": "IPv4", 00:17:07.053 "traddr": "10.0.0.2", 00:17:07.053 "trsvcid": "4420" 00:17:07.053 }, 00:17:07.053 "peer_address": { 00:17:07.053 "trtype": "TCP", 00:17:07.053 "adrfam": "IPv4", 00:17:07.053 "traddr": "10.0.0.1", 00:17:07.053 "trsvcid": "58482" 00:17:07.053 }, 00:17:07.053 "auth": { 00:17:07.053 "state": "completed", 00:17:07.053 "digest": "sha384", 00:17:07.053 "dhgroup": "ffdhe2048" 00:17:07.053 } 00:17:07.053 } 00:17:07.053 ]' 00:17:07.053 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.311 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.311 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.311 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.311 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.311 07:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.311 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.311 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.568 07:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.501 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.759 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.759 07:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.018 00:17:09.018 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.018 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.018 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.276 { 00:17:09.276 "cntlid": 65, 00:17:09.276 "qid": 0, 00:17:09.276 "state": "enabled", 00:17:09.276 "thread": "nvmf_tgt_poll_group_000", 00:17:09.276 "listen_address": { 00:17:09.276 "trtype": "TCP", 00:17:09.276 "adrfam": "IPv4", 00:17:09.276 "traddr": "10.0.0.2", 00:17:09.276 "trsvcid": "4420" 00:17:09.276 }, 00:17:09.276 "peer_address": { 00:17:09.276 "trtype": "TCP", 00:17:09.276 "adrfam": "IPv4", 00:17:09.276 "traddr": "10.0.0.1", 
00:17:09.276 "trsvcid": "57746" 00:17:09.276 }, 00:17:09.276 "auth": { 00:17:09.276 "state": "completed", 00:17:09.276 "digest": "sha384", 00:17:09.276 "dhgroup": "ffdhe3072" 00:17:09.276 } 00:17:09.276 } 00:17:09.276 ]' 00:17:09.276 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.534 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.792 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.725 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.986 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.245 00:17:11.245 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.245 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.245 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.503 { 00:17:11.503 "cntlid": 67, 00:17:11.503 "qid": 0, 00:17:11.503 "state": "enabled", 00:17:11.503 "thread": "nvmf_tgt_poll_group_000", 00:17:11.503 "listen_address": { 00:17:11.503 "trtype": "TCP", 00:17:11.503 "adrfam": "IPv4", 00:17:11.503 "traddr": "10.0.0.2", 00:17:11.503 "trsvcid": "4420" 00:17:11.503 }, 00:17:11.503 "peer_address": { 00:17:11.503 "trtype": "TCP", 00:17:11.503 "adrfam": "IPv4", 00:17:11.503 "traddr": "10.0.0.1", 00:17:11.503 "trsvcid": "57754" 00:17:11.503 }, 00:17:11.503 "auth": { 00:17:11.503 "state": "completed", 00:17:11.503 "digest": "sha384", 00:17:11.503 "dhgroup": "ffdhe3072" 00:17:11.503 } 00:17:11.503 } 00:17:11.503 ]' 00:17:11.503 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.503 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.503 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.761 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.761 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.761 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.761 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.761 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.018 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.950 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.238 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.500 00:17:13.758 07:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.758 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.758 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.758 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.758 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.758 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.758 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.015 { 00:17:14.015 "cntlid": 69, 00:17:14.015 "qid": 0, 00:17:14.015 "state": "enabled", 00:17:14.015 "thread": "nvmf_tgt_poll_group_000", 00:17:14.015 "listen_address": { 00:17:14.015 "trtype": "TCP", 00:17:14.015 "adrfam": "IPv4", 00:17:14.015 "traddr": "10.0.0.2", 00:17:14.015 "trsvcid": "4420" 00:17:14.015 }, 00:17:14.015 "peer_address": { 00:17:14.015 "trtype": "TCP", 00:17:14.015 "adrfam": "IPv4", 00:17:14.015 "traddr": "10.0.0.1", 00:17:14.015 "trsvcid": "57788" 00:17:14.015 }, 00:17:14.015 "auth": { 00:17:14.015 "state": "completed", 00:17:14.015 "digest": "sha384", 00:17:14.015 "dhgroup": "ffdhe3072" 00:17:14.015 } 00:17:14.015 } 00:17:14.015 ]' 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.015 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.272 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.204 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.462 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.720 00:17:15.720 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.720 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.720 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.978 { 00:17:15.978 "cntlid": 71, 00:17:15.978 "qid": 0, 00:17:15.978 "state": "enabled", 00:17:15.978 "thread": "nvmf_tgt_poll_group_000", 
00:17:15.978 "listen_address": { 00:17:15.978 "trtype": "TCP", 00:17:15.978 "adrfam": "IPv4", 00:17:15.978 "traddr": "10.0.0.2", 00:17:15.978 "trsvcid": "4420" 00:17:15.978 }, 00:17:15.978 "peer_address": { 00:17:15.978 "trtype": "TCP", 00:17:15.978 "adrfam": "IPv4", 00:17:15.978 "traddr": "10.0.0.1", 00:17:15.978 "trsvcid": "57824" 00:17:15.978 }, 00:17:15.978 "auth": { 00:17:15.978 "state": "completed", 00:17:15.978 "digest": "sha384", 00:17:15.978 "dhgroup": "ffdhe3072" 00:17:15.978 } 00:17:15.978 } 00:17:15.978 ]' 00:17:15.978 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.235 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.493 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 
00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.426 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.684 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.249 00:17:18.249 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.249 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.249 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.507 07:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.507 { 00:17:18.507 "cntlid": 73, 00:17:18.507 "qid": 0, 00:17:18.507 "state": "enabled", 00:17:18.507 "thread": "nvmf_tgt_poll_group_000", 00:17:18.507 "listen_address": { 00:17:18.507 "trtype": "TCP", 00:17:18.507 "adrfam": "IPv4", 00:17:18.507 "traddr": "10.0.0.2", 00:17:18.507 "trsvcid": "4420" 00:17:18.507 }, 00:17:18.507 "peer_address": { 00:17:18.507 "trtype": "TCP", 00:17:18.507 "adrfam": "IPv4", 00:17:18.507 "traddr": "10.0.0.1", 00:17:18.507 "trsvcid": "41690" 00:17:18.507 }, 00:17:18.507 "auth": { 00:17:18.507 "state": "completed", 00:17:18.507 "digest": "sha384", 00:17:18.507 "dhgroup": "ffdhe4096" 00:17:18.507 } 00:17:18.507 } 00:17:18.507 ]' 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.507 07:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.507 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.765 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:17:19.698 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.955 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.520 00:17:20.520 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.520 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.520 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.777 { 00:17:20.777 "cntlid": 75, 00:17:20.777 "qid": 0, 00:17:20.777 "state": "enabled", 00:17:20.777 "thread": "nvmf_tgt_poll_group_000", 00:17:20.777 "listen_address": { 00:17:20.777 "trtype": "TCP", 00:17:20.777 "adrfam": "IPv4", 00:17:20.777 "traddr": "10.0.0.2", 00:17:20.777 "trsvcid": "4420" 00:17:20.777 }, 00:17:20.777 "peer_address": { 00:17:20.777 "trtype": "TCP", 00:17:20.777 "adrfam": "IPv4", 00:17:20.777 "traddr": "10.0.0.1", 00:17:20.777 "trsvcid": "41712" 00:17:20.777 
}, 00:17:20.777 "auth": { 00:17:20.777 "state": "completed", 00:17:20.777 "digest": "sha384", 00:17:20.777 "dhgroup": "ffdhe4096" 00:17:20.777 } 00:17:20.777 } 00:17:20.777 ]' 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.777 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.033 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:17:21.965 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.222 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.480 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.737 00:17:22.737 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.737 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.737 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.994 { 00:17:22.994 "cntlid": 77, 00:17:22.994 "qid": 0, 00:17:22.994 "state": "enabled", 00:17:22.994 "thread": "nvmf_tgt_poll_group_000", 00:17:22.994 "listen_address": { 00:17:22.994 "trtype": "TCP", 00:17:22.994 "adrfam": "IPv4", 00:17:22.994 "traddr": "10.0.0.2", 00:17:22.994 "trsvcid": "4420" 00:17:22.994 }, 00:17:22.994 "peer_address": { 00:17:22.994 "trtype": "TCP", 00:17:22.994 "adrfam": "IPv4", 00:17:22.994 "traddr": "10.0.0.1", 00:17:22.994 "trsvcid": "41748" 00:17:22.994 }, 00:17:22.994 "auth": { 00:17:22.994 "state": "completed", 00:17:22.994 "digest": "sha384", 00:17:22.994 "dhgroup": "ffdhe4096" 00:17:22.994 } 00:17:22.994 } 00:17:22.994 ]' 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.994 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.251 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.251 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.251 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:23.509 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.440 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.697 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:24.697 07:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.698 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.956 00:17:24.956 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.956 07:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.956 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.213 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.213 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.213 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.213 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.213 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.213 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.213 { 00:17:25.214 "cntlid": 79, 00:17:25.214 "qid": 0, 00:17:25.214 "state": "enabled", 00:17:25.214 "thread": "nvmf_tgt_poll_group_000", 00:17:25.214 "listen_address": { 00:17:25.214 "trtype": "TCP", 00:17:25.214 "adrfam": "IPv4", 00:17:25.214 "traddr": "10.0.0.2", 00:17:25.214 "trsvcid": "4420" 00:17:25.214 }, 00:17:25.214 "peer_address": { 00:17:25.214 "trtype": "TCP", 00:17:25.214 "adrfam": "IPv4", 00:17:25.214 "traddr": "10.0.0.1", 00:17:25.214 "trsvcid": "41772" 00:17:25.214 }, 00:17:25.214 "auth": { 00:17:25.214 "state": "completed", 00:17:25.214 "digest": "sha384", 00:17:25.214 "dhgroup": "ffdhe4096" 00:17:25.214 } 00:17:25.214 } 00:17:25.214 ]' 00:17:25.214 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.214 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.214 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.214 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.214 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.471 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.472 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.472 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.728 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.661 07:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.661 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.957 07:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.957 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.546 00:17:27.546 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.546 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.546 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.804 { 00:17:27.804 "cntlid": 81, 00:17:27.804 "qid": 0, 00:17:27.804 "state": "enabled", 00:17:27.804 "thread": 
"nvmf_tgt_poll_group_000", 00:17:27.804 "listen_address": { 00:17:27.804 "trtype": "TCP", 00:17:27.804 "adrfam": "IPv4", 00:17:27.804 "traddr": "10.0.0.2", 00:17:27.804 "trsvcid": "4420" 00:17:27.804 }, 00:17:27.804 "peer_address": { 00:17:27.804 "trtype": "TCP", 00:17:27.804 "adrfam": "IPv4", 00:17:27.804 "traddr": "10.0.0.1", 00:17:27.804 "trsvcid": "56478" 00:17:27.804 }, 00:17:27.804 "auth": { 00:17:27.804 "state": "completed", 00:17:27.804 "digest": "sha384", 00:17:27.804 "dhgroup": "ffdhe6144" 00:17:27.804 } 00:17:27.804 } 00:17:27.804 ]' 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.804 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.062 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.994 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.251 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:29.251 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.251 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.251 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.252 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.816 00:17:29.816 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.816 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.816 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.074 { 00:17:30.074 "cntlid": 83, 00:17:30.074 "qid": 0, 00:17:30.074 "state": "enabled", 00:17:30.074 "thread": "nvmf_tgt_poll_group_000", 00:17:30.074 "listen_address": { 00:17:30.074 "trtype": "TCP", 00:17:30.074 "adrfam": "IPv4", 00:17:30.074 "traddr": "10.0.0.2", 00:17:30.074 "trsvcid": "4420" 00:17:30.074 }, 00:17:30.074 "peer_address": { 00:17:30.074 "trtype": "TCP", 00:17:30.074 "adrfam": "IPv4", 00:17:30.074 "traddr": "10.0.0.1", 00:17:30.074 "trsvcid": "56508" 00:17:30.074 }, 00:17:30.074 "auth": { 00:17:30.074 "state": "completed", 00:17:30.074 "digest": "sha384", 00:17:30.074 "dhgroup": "ffdhe6144" 00:17:30.074 } 00:17:30.074 } 00:17:30.074 ]' 00:17:30.074 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.332 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.589 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:17:31.521 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.521 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.521 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.522 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.522 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.522 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.522 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.522 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.780 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.780 07:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.345 00:17:32.345 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.345 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.345 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.603 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.603 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.603 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.603 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.603 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.604 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.604 { 00:17:32.604 "cntlid": 85, 00:17:32.604 "qid": 0, 00:17:32.604 "state": "enabled", 00:17:32.604 "thread": "nvmf_tgt_poll_group_000", 00:17:32.604 "listen_address": { 00:17:32.604 "trtype": "TCP", 00:17:32.604 "adrfam": "IPv4", 00:17:32.604 "traddr": "10.0.0.2", 00:17:32.604 "trsvcid": "4420" 00:17:32.604 }, 00:17:32.604 "peer_address": { 00:17:32.604 "trtype": "TCP", 00:17:32.604 "adrfam": "IPv4", 00:17:32.604 "traddr": "10.0.0.1", 
00:17:32.604 "trsvcid": "56522" 00:17:32.604 }, 00:17:32.604 "auth": { 00:17:32.604 "state": "completed", 00:17:32.604 "digest": "sha384", 00:17:32.604 "dhgroup": "ffdhe6144" 00:17:32.604 } 00:17:32.604 } 00:17:32.604 ]' 00:17:32.604 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.604 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.604 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.604 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.604 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.861 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.861 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.861 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.119 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.052 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.052 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.310 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:34.311 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.311 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.311 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.311 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.311 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.876 00:17:34.876 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.876 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.876 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.134 07:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.134 { 00:17:35.134 "cntlid": 87, 00:17:35.134 "qid": 0, 00:17:35.134 "state": "enabled", 00:17:35.134 "thread": "nvmf_tgt_poll_group_000", 00:17:35.134 "listen_address": { 00:17:35.134 "trtype": "TCP", 00:17:35.134 "adrfam": "IPv4", 00:17:35.134 "traddr": "10.0.0.2", 00:17:35.134 "trsvcid": "4420" 00:17:35.134 }, 00:17:35.134 "peer_address": { 00:17:35.134 "trtype": "TCP", 00:17:35.134 "adrfam": "IPv4", 00:17:35.134 "traddr": "10.0.0.1", 00:17:35.134 "trsvcid": "56550" 00:17:35.134 }, 00:17:35.134 "auth": { 00:17:35.134 "state": "completed", 00:17:35.134 "digest": "sha384", 00:17:35.134 "dhgroup": "ffdhe6144" 00:17:35.134 } 00:17:35.134 } 00:17:35.134 ]' 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.134 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.392 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.324 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.583 07:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.583 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:37.516 00:17:37.516 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.516 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.516 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.774 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.774 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.774 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.774 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.774 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.774 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.774 { 00:17:37.774 "cntlid": 89, 00:17:37.775 "qid": 0, 00:17:37.775 "state": "enabled", 00:17:37.775 "thread": "nvmf_tgt_poll_group_000", 00:17:37.775 "listen_address": { 00:17:37.775 "trtype": "TCP", 00:17:37.775 "adrfam": "IPv4", 00:17:37.775 "traddr": "10.0.0.2", 00:17:37.775 "trsvcid": "4420" 00:17:37.775 }, 00:17:37.775 "peer_address": { 00:17:37.775 "trtype": "TCP", 00:17:37.775 "adrfam": "IPv4", 00:17:37.775 "traddr": "10.0.0.1", 00:17:37.775 "trsvcid": "56562" 00:17:37.775 }, 00:17:37.775 "auth": { 00:17:37.775 "state": "completed", 00:17:37.775 "digest": "sha384", 00:17:37.775 "dhgroup": "ffdhe8192" 00:17:37.775 } 00:17:37.775 } 00:17:37.775 ]' 00:17:37.775 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.775 
07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.775 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.032 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.032 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.032 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.032 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.032 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.289 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.222 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.479 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.411 00:17:40.411 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.411 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.411 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:40.669 { 00:17:40.669 "cntlid": 91, 00:17:40.669 "qid": 0, 00:17:40.669 "state": "enabled", 00:17:40.669 "thread": "nvmf_tgt_poll_group_000", 00:17:40.669 "listen_address": { 00:17:40.669 "trtype": "TCP", 00:17:40.669 "adrfam": "IPv4", 00:17:40.669 "traddr": "10.0.0.2", 00:17:40.669 "trsvcid": "4420" 00:17:40.669 }, 00:17:40.669 "peer_address": { 00:17:40.669 "trtype": "TCP", 00:17:40.669 "adrfam": "IPv4", 00:17:40.669 "traddr": "10.0.0.1", 00:17:40.669 "trsvcid": "34586" 00:17:40.669 }, 00:17:40.669 "auth": { 00:17:40.669 "state": "completed", 00:17:40.669 "digest": "sha384", 00:17:40.669 "dhgroup": "ffdhe8192" 00:17:40.669 } 00:17:40.669 } 00:17:40.669 ]' 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.669 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.957 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:17:41.892 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.892 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.892 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.892 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.892 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.168 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.168 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.168 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.425 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.356 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.356 { 00:17:43.356 "cntlid": 93, 00:17:43.356 "qid": 0, 00:17:43.356 "state": "enabled", 00:17:43.356 "thread": "nvmf_tgt_poll_group_000", 00:17:43.356 "listen_address": { 00:17:43.356 "trtype": "TCP", 00:17:43.356 "adrfam": "IPv4", 00:17:43.356 "traddr": "10.0.0.2", 00:17:43.356 "trsvcid": "4420" 00:17:43.356 }, 00:17:43.356 "peer_address": { 00:17:43.356 "trtype": "TCP", 00:17:43.356 "adrfam": "IPv4", 00:17:43.356 "traddr": "10.0.0.1", 00:17:43.356 "trsvcid": "34612" 00:17:43.356 }, 00:17:43.356 "auth": { 00:17:43.356 "state": "completed", 00:17:43.356 "digest": "sha384", 00:17:43.356 "dhgroup": "ffdhe8192" 00:17:43.356 } 00:17:43.356 } 00:17:43.356 ]' 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.356 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.614 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
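The `jq -r '.[0].auth.digest'` / `[[ sha384 == \s\h\a\3\8\4 ]]` pairs above are the test's assertion step: after attaching, it asks the target for the subsystem's qpairs and checks that the established connection negotiated exactly the digest and DH group that were configured, and that authentication reached the `completed` state. A minimal re-creation of that check is sketched below, assuming the JSON shape reported by `nvmf_subsystem_get_qpairs` in this log; the field extraction uses a bash regex instead of `jq`, which is my simplification.

```shell
#!/usr/bin/env bash
# Sketch of the qpair auth verification, assuming the JSON shape seen
# in the log above (trimmed to the fields the test actually reads).
qpairs='[ { "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe8192" } } ]'

# Extract "field": "value" with a bash regex (the real test uses jq).
get_field() {
    [[ $qpairs =~ \"$1\":\ *\"([^\"]+)\" ]] && echo "${BASH_REMATCH[1]}"
}

[[ "$(get_field state)"   == "completed" ]] || { echo "auth not completed"; exit 1; }
[[ "$(get_field digest)"  == "sha384"    ]] || { echo "wrong digest";      exit 1; }
[[ "$(get_field dhgroup)" == "ffdhe8192" ]] || { echo "wrong dhgroup";     exit 1; }
echo "qpair auth verified"
```

If any of the three checks failed, the real test would abort the run at that iteration rather than continue sweeping key/digest combinations.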
00:17:43.614 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.614 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.614 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.614 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.872 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.805 07:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.805 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.063 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.064 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
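Each iteration in this log follows the same shape: restrict the host's allowed DHCHAP digests/dhgroups, register the key pair on the subsystem, attach a controller, verify, then tear down. The dry-run sketch below reconstructs that command sequence from the log; the `run` wrapper (which only echoes) and the generic `keyid` variable are my additions, so it can be read without an SPDK target present.

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect/authenticate iteration, with socket and
# NQN values taken from the log. "run" echoes instead of executing.
run() { echo "+ $*"; }

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
keyid=1

# Host side: restrict negotiation to a single digest/dhgroup combination.
run "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target side: allow this host with the key/ctrlr-key pair under test
# (note: in the log, key3 has no ctrlr-key, so that flag is per-key).
run "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach; the real test then verifies qpair auth state and detaches.
run "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
```

The same subsystem/host NQNs are reused across every iteration; only the key id and the digest/dhgroup pair change.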
00:17:45.064 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.997 00:17:45.997 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.997 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.997 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.254 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.254 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.254 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.255 { 00:17:46.255 "cntlid": 95, 00:17:46.255 "qid": 0, 00:17:46.255 "state": "enabled", 00:17:46.255 "thread": "nvmf_tgt_poll_group_000", 00:17:46.255 "listen_address": { 00:17:46.255 "trtype": "TCP", 00:17:46.255 "adrfam": "IPv4", 00:17:46.255 "traddr": "10.0.0.2", 00:17:46.255 "trsvcid": "4420" 00:17:46.255 }, 00:17:46.255 "peer_address": { 00:17:46.255 "trtype": "TCP", 00:17:46.255 "adrfam": "IPv4", 00:17:46.255 "traddr": "10.0.0.1", 
00:17:46.255 "trsvcid": "34636" 00:17:46.255 }, 00:17:46.255 "auth": { 00:17:46.255 "state": "completed", 00:17:46.255 "digest": "sha384", 00:17:46.255 "dhgroup": "ffdhe8192" 00:17:46.255 } 00:17:46.255 } 00:17:46.255 ]' 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.255 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.820 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:17:47.754 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.754 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.754 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.754 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.754 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.754 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:47.754 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.754 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.754 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.754 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.012 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:48.012 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.012 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.012 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.013 07:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.013 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.271 00:17:48.271 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.271 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.271 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.529 { 00:17:48.529 "cntlid": 97, 00:17:48.529 "qid": 0, 00:17:48.529 "state": "enabled", 00:17:48.529 "thread": "nvmf_tgt_poll_group_000", 00:17:48.529 "listen_address": { 00:17:48.529 "trtype": "TCP", 00:17:48.529 "adrfam": "IPv4", 00:17:48.529 "traddr": "10.0.0.2", 00:17:48.529 "trsvcid": "4420" 00:17:48.529 }, 00:17:48.529 "peer_address": { 00:17:48.529 "trtype": "TCP", 00:17:48.529 "adrfam": "IPv4", 00:17:48.529 "traddr": "10.0.0.1", 00:17:48.529 "trsvcid": "39906" 00:17:48.529 }, 00:17:48.529 "auth": { 00:17:48.529 "state": "completed", 00:17:48.529 "digest": "sha512", 00:17:48.529 "dhgroup": "null" 00:17:48.529 } 00:17:48.529 } 00:17:48.529 ]' 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.529 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.529 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.529 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
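The `for digest in "${digests[@]}"` / `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` lines visible above (auth.sh lines 91-96) show that the test is a full cartesian sweep. The sketch below illustrates that loop structure; the array contents mirror what appears in this log (sha384/sha512, null/ffdhe8192, keys 0-3) padded to the full sets I would expect, so the exact values are an assumption rather than a quote from auth.sh.

```shell
#!/usr/bin/env bash
# Sketch of the nested sweep driving this log: every digest x dhgroup
# x key combination gets its own set_options / connect / verify cycle.
# Array contents are assumed, not copied from auth.sh.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)

combos=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for key in "${keys[@]}"; do
            # Here the real test calls bdev_nvme_set_options and then
            # connect_authenticate "$digest" "$dhgroup" "${key#key}".
            combos=$((combos + 1))
        done
    done
done
echo "iterations: $combos"   # 3 digests * 6 dhgroups * 4 keys = 72
```

This explains why the log repeats near-identical blocks: each block is one cell of that grid, and the sha384/ffdhe8192 cells above are followed here by the start of the sha512/null column.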
00:17:48.529 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.787 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.161 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.419 00:17:50.419 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.419 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.419 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.677 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.677 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.677 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.677 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.677 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.677 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.677 { 00:17:50.677 "cntlid": 99, 00:17:50.677 "qid": 0, 00:17:50.677 "state": "enabled", 00:17:50.677 "thread": "nvmf_tgt_poll_group_000", 00:17:50.677 "listen_address": { 00:17:50.677 "trtype": "TCP", 00:17:50.677 "adrfam": "IPv4", 00:17:50.677 "traddr": "10.0.0.2", 00:17:50.677 "trsvcid": "4420" 00:17:50.677 }, 00:17:50.677 "peer_address": { 00:17:50.677 "trtype": "TCP", 00:17:50.677 "adrfam": "IPv4", 00:17:50.677 "traddr": "10.0.0.1", 00:17:50.677 "trsvcid": "39930" 00:17:50.677 }, 00:17:50.677 "auth": { 00:17:50.677 "state": "completed", 00:17:50.677 "digest": "sha512", 00:17:50.677 "dhgroup": "null" 00:17:50.677 } 00:17:50.677 } 00:17:50.677 ]' 00:17:50.677 
07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.934 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.191 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.124 07:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.124 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 07:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.382 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.640 00:17:52.640 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.640 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.640 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.898 { 00:17:52.898 "cntlid": 101, 00:17:52.898 "qid": 0, 00:17:52.898 "state": "enabled", 00:17:52.898 "thread": "nvmf_tgt_poll_group_000", 00:17:52.898 "listen_address": { 00:17:52.898 "trtype": "TCP", 00:17:52.898 "adrfam": "IPv4", 00:17:52.898 "traddr": "10.0.0.2", 00:17:52.898 "trsvcid": "4420" 00:17:52.898 }, 00:17:52.898 "peer_address": { 00:17:52.898 "trtype": "TCP", 00:17:52.898 "adrfam": "IPv4", 00:17:52.898 "traddr": "10.0.0.1", 00:17:52.898 "trsvcid": "39958" 00:17:52.898 }, 00:17:52.898 "auth": { 00:17:52.898 "state": "completed", 00:17:52.898 "digest": "sha512", 00:17:52.898 "dhgroup": "null" 00:17:52.898 } 00:17:52.898 } 00:17:52.898 ]' 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.898 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.156 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.156 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.156 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.413 07:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.349 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.607 07:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.607 07:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.864 00:17:54.864 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.864 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.864 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.163 { 00:17:55.163 "cntlid": 103, 00:17:55.163 "qid": 0, 00:17:55.163 "state": "enabled", 00:17:55.163 "thread": "nvmf_tgt_poll_group_000", 00:17:55.163 "listen_address": { 00:17:55.163 "trtype": "TCP", 00:17:55.163 "adrfam": "IPv4", 00:17:55.163 "traddr": "10.0.0.2", 00:17:55.163 "trsvcid": "4420" 00:17:55.163 }, 00:17:55.163 "peer_address": { 00:17:55.163 "trtype": "TCP", 00:17:55.163 "adrfam": "IPv4", 00:17:55.163 "traddr": "10.0.0.1", 00:17:55.163 "trsvcid": "39998" 00:17:55.163 }, 00:17:55.163 "auth": { 00:17:55.163 "state": "completed", 00:17:55.163 "digest": "sha512", 00:17:55.163 "dhgroup": "null" 00:17:55.163 } 00:17:55.163 } 00:17:55.163 ]' 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.163 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.421 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.354 07:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.920 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:56.920 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.920 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.920 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.920 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.920 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.921 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.921 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.921 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.921 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.921 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:17:56.921 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.178 00:17:57.178 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.178 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.178 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.436 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.436 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.436 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.436 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.436 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.436 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.436 { 00:17:57.436 "cntlid": 105, 00:17:57.436 "qid": 0, 00:17:57.436 "state": "enabled", 00:17:57.436 "thread": "nvmf_tgt_poll_group_000", 00:17:57.436 "listen_address": { 00:17:57.436 "trtype": "TCP", 00:17:57.436 "adrfam": "IPv4", 00:17:57.437 "traddr": "10.0.0.2", 00:17:57.437 "trsvcid": "4420" 00:17:57.437 }, 00:17:57.437 "peer_address": { 00:17:57.437 "trtype": "TCP", 00:17:57.437 "adrfam": "IPv4", 
00:17:57.437 "traddr": "10.0.0.1", 00:17:57.437 "trsvcid": "53440" 00:17:57.437 }, 00:17:57.437 "auth": { 00:17:57.437 "state": "completed", 00:17:57.437 "digest": "sha512", 00:17:57.437 "dhgroup": "ffdhe2048" 00:17:57.437 } 00:17:57.437 } 00:17:57.437 ]' 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.437 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.695 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.629 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.629 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.887 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.145 00:17:59.401 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.401 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.401 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.659 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.659 { 00:17:59.659 "cntlid": 107, 00:17:59.659 "qid": 0, 00:17:59.659 "state": "enabled", 00:17:59.659 "thread": "nvmf_tgt_poll_group_000", 00:17:59.659 "listen_address": { 00:17:59.659 "trtype": "TCP", 00:17:59.659 "adrfam": "IPv4", 00:17:59.659 "traddr": "10.0.0.2", 00:17:59.659 "trsvcid": "4420" 00:17:59.659 }, 00:17:59.659 "peer_address": { 00:17:59.659 "trtype": "TCP", 00:17:59.659 "adrfam": "IPv4", 00:17:59.659 "traddr": "10.0.0.1", 00:17:59.659 "trsvcid": "53466" 00:17:59.659 }, 00:17:59.659 "auth": { 00:17:59.659 "state": "completed", 00:17:59.659 "digest": "sha512", 00:17:59.659 "dhgroup": "ffdhe2048" 00:17:59.659 } 00:17:59.659 } 00:17:59.659 ]' 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.659 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.659 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.659 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.659 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.659 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.659 07:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.916 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.848 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.106 07:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.106 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:01.364 00:18:01.622 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.622 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.622 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.879 { 00:18:01.879 "cntlid": 109, 00:18:01.879 "qid": 0, 00:18:01.879 "state": "enabled", 00:18:01.879 "thread": "nvmf_tgt_poll_group_000", 00:18:01.879 "listen_address": { 00:18:01.879 "trtype": "TCP", 00:18:01.879 "adrfam": "IPv4", 00:18:01.879 "traddr": "10.0.0.2", 00:18:01.879 "trsvcid": "4420" 00:18:01.879 }, 00:18:01.879 "peer_address": { 00:18:01.879 "trtype": "TCP", 00:18:01.879 "adrfam": "IPv4", 00:18:01.879 "traddr": "10.0.0.1", 00:18:01.879 "trsvcid": "53484" 00:18:01.879 }, 00:18:01.879 "auth": { 00:18:01.879 "state": "completed", 00:18:01.879 "digest": "sha512", 00:18:01.879 "dhgroup": "ffdhe2048" 00:18:01.879 } 00:18:01.879 } 00:18:01.879 ]' 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.879 
07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.879 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.880 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.137 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.068 07:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.068 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.325 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:03.325 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.325 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.325 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.326 07:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.326 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.890 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.890 { 00:18:03.890 "cntlid": 111, 00:18:03.890 "qid": 0, 
00:18:03.890 "state": "enabled", 00:18:03.890 "thread": "nvmf_tgt_poll_group_000", 00:18:03.890 "listen_address": { 00:18:03.890 "trtype": "TCP", 00:18:03.890 "adrfam": "IPv4", 00:18:03.890 "traddr": "10.0.0.2", 00:18:03.890 "trsvcid": "4420" 00:18:03.890 }, 00:18:03.890 "peer_address": { 00:18:03.890 "trtype": "TCP", 00:18:03.890 "adrfam": "IPv4", 00:18:03.890 "traddr": "10.0.0.1", 00:18:03.890 "trsvcid": "53512" 00:18:03.890 }, 00:18:03.890 "auth": { 00:18:03.890 "state": "completed", 00:18:03.890 "digest": "sha512", 00:18:03.890 "dhgroup": "ffdhe2048" 00:18:03.890 } 00:18:03.890 } 00:18:03.890 ]' 00:18:03.890 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.148 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.406 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.341 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.599 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.857 00:18:05.857 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.857 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.857 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.422 { 00:18:06.422 "cntlid": 113, 00:18:06.422 "qid": 0, 00:18:06.422 "state": "enabled", 00:18:06.422 "thread": "nvmf_tgt_poll_group_000", 00:18:06.422 "listen_address": { 00:18:06.422 "trtype": "TCP", 00:18:06.422 "adrfam": "IPv4", 00:18:06.422 "traddr": "10.0.0.2", 00:18:06.422 "trsvcid": "4420" 00:18:06.422 }, 00:18:06.422 "peer_address": { 00:18:06.422 "trtype": "TCP", 00:18:06.422 "adrfam": "IPv4", 00:18:06.422 "traddr": "10.0.0.1", 00:18:06.422 "trsvcid": "53546" 00:18:06.422 }, 00:18:06.422 "auth": { 00:18:06.422 "state": "completed", 00:18:06.422 "digest": "sha512", 00:18:06.422 "dhgroup": "ffdhe3072" 00:18:06.422 } 00:18:06.422 } 00:18:06.422 ]' 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.422 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.679 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:18:07.610 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.610 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.611 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.611 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.611 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.611 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:18:07.611 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.611 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.867 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.868 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.868 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.868 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.125 00:18:08.125 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.125 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.125 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.382 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.382 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.382 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.382 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.382 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.382 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.382 { 00:18:08.382 "cntlid": 115, 00:18:08.382 "qid": 0, 00:18:08.382 "state": "enabled", 00:18:08.382 "thread": "nvmf_tgt_poll_group_000", 00:18:08.382 "listen_address": { 00:18:08.382 "trtype": "TCP", 00:18:08.382 "adrfam": "IPv4", 00:18:08.382 "traddr": "10.0.0.2", 00:18:08.382 "trsvcid": "4420" 00:18:08.382 }, 00:18:08.382 "peer_address": { 
00:18:08.382 "trtype": "TCP", 00:18:08.382 "adrfam": "IPv4", 00:18:08.383 "traddr": "10.0.0.1", 00:18:08.383 "trsvcid": "56262" 00:18:08.383 }, 00:18:08.383 "auth": { 00:18:08.383 "state": "completed", 00:18:08.383 "digest": "sha512", 00:18:08.383 "dhgroup": "ffdhe3072" 00:18:08.383 } 00:18:08.383 } 00:18:08.383 ]' 00:18:08.383 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.651 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.652 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.652 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.652 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.652 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.652 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.652 07:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.981 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:09.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.914 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.172 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.432 00:18:10.432 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.432 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.432 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.690 07:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.690 { 00:18:10.690 "cntlid": 117, 00:18:10.690 "qid": 0, 00:18:10.690 "state": "enabled", 00:18:10.690 "thread": "nvmf_tgt_poll_group_000", 00:18:10.690 "listen_address": { 00:18:10.690 "trtype": "TCP", 00:18:10.690 "adrfam": "IPv4", 00:18:10.690 "traddr": "10.0.0.2", 00:18:10.690 "trsvcid": "4420" 00:18:10.690 }, 00:18:10.690 "peer_address": { 00:18:10.690 "trtype": "TCP", 00:18:10.690 "adrfam": "IPv4", 00:18:10.690 "traddr": "10.0.0.1", 00:18:10.690 "trsvcid": "56286" 00:18:10.690 }, 00:18:10.690 "auth": { 00:18:10.690 "state": "completed", 00:18:10.690 "digest": "sha512", 00:18:10.690 "dhgroup": "ffdhe3072" 00:18:10.690 } 00:18:10.690 } 00:18:10.690 ]' 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.690 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.949 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.949 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.949 07:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.207 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.138 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.396 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.396 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.654 00:18:12.654 07:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.654 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.654 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.912 { 00:18:12.912 "cntlid": 119, 00:18:12.912 "qid": 0, 00:18:12.912 "state": "enabled", 00:18:12.912 "thread": "nvmf_tgt_poll_group_000", 00:18:12.912 "listen_address": { 00:18:12.912 "trtype": "TCP", 00:18:12.912 "adrfam": "IPv4", 00:18:12.912 "traddr": "10.0.0.2", 00:18:12.912 "trsvcid": "4420" 00:18:12.912 }, 00:18:12.912 "peer_address": { 00:18:12.912 "trtype": "TCP", 00:18:12.912 "adrfam": "IPv4", 00:18:12.912 "traddr": "10.0.0.1", 00:18:12.912 "trsvcid": "56292" 00:18:12.912 }, 00:18:12.912 "auth": { 00:18:12.912 "state": "completed", 00:18:12.912 "digest": "sha512", 00:18:12.912 "dhgroup": "ffdhe3072" 00:18:12.912 } 00:18:12.912 } 00:18:12.912 ]' 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.912 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.170 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.170 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.170 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.427 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.360 07:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.360 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.617 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:14.617 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.617 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.617 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:14.617 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.617 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.618 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.618 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.618 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 07:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.618 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.618 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.875 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.132 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.132 { 
00:18:15.132 "cntlid": 121, 00:18:15.132 "qid": 0, 00:18:15.132 "state": "enabled", 00:18:15.132 "thread": "nvmf_tgt_poll_group_000", 00:18:15.132 "listen_address": { 00:18:15.132 "trtype": "TCP", 00:18:15.132 "adrfam": "IPv4", 00:18:15.132 "traddr": "10.0.0.2", 00:18:15.133 "trsvcid": "4420" 00:18:15.133 }, 00:18:15.133 "peer_address": { 00:18:15.133 "trtype": "TCP", 00:18:15.133 "adrfam": "IPv4", 00:18:15.133 "traddr": "10.0.0.1", 00:18:15.133 "trsvcid": "56310" 00:18:15.133 }, 00:18:15.133 "auth": { 00:18:15.133 "state": "completed", 00:18:15.133 "digest": "sha512", 00:18:15.133 "dhgroup": "ffdhe4096" 00:18:15.133 } 00:18:15.133 } 00:18:15.133 ]' 00:18:15.133 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.390 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.648 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.581 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.839 07:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.839 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.405 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.405 { 00:18:17.405 "cntlid": 123, 00:18:17.405 "qid": 0, 00:18:17.405 "state": "enabled", 00:18:17.405 "thread": "nvmf_tgt_poll_group_000", 00:18:17.405 "listen_address": { 00:18:17.405 "trtype": "TCP", 00:18:17.405 "adrfam": "IPv4", 00:18:17.405 "traddr": "10.0.0.2", 00:18:17.405 "trsvcid": "4420" 00:18:17.405 }, 00:18:17.405 "peer_address": { 00:18:17.405 "trtype": "TCP", 00:18:17.405 "adrfam": "IPv4", 00:18:17.405 "traddr": "10.0.0.1", 00:18:17.405 "trsvcid": "36106" 00:18:17.405 }, 00:18:17.405 "auth": { 00:18:17.405 "state": "completed", 00:18:17.405 "digest": "sha512", 00:18:17.405 "dhgroup": "ffdhe4096" 00:18:17.405 } 00:18:17.405 } 00:18:17.405 ]' 00:18:17.405 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.663 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.663 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.663 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.663 
07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.663 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.663 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.663 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.921 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:18:18.854 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.854 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.854 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.855 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.855 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.855 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.855 07:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.855 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.114 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.679 00:18:19.679 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.679 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.679 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.679 { 00:18:19.679 "cntlid": 125, 00:18:19.679 "qid": 0, 00:18:19.679 "state": "enabled", 00:18:19.679 "thread": "nvmf_tgt_poll_group_000", 00:18:19.679 "listen_address": { 00:18:19.679 "trtype": "TCP", 00:18:19.679 "adrfam": "IPv4", 00:18:19.679 "traddr": "10.0.0.2", 00:18:19.679 "trsvcid": "4420" 00:18:19.679 }, 00:18:19.679 "peer_address": { 
00:18:19.679 "trtype": "TCP", 00:18:19.679 "adrfam": "IPv4", 00:18:19.679 "traddr": "10.0.0.1", 00:18:19.679 "trsvcid": "36130" 00:18:19.679 }, 00:18:19.679 "auth": { 00:18:19.679 "state": "completed", 00:18:19.679 "digest": "sha512", 00:18:19.679 "dhgroup": "ffdhe4096" 00:18:19.679 } 00:18:19.679 } 00:18:19.679 ]' 00:18:19.679 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.937 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.194 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:21.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.126 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.383 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.640 00:18:21.641 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.641 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.641 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.898 { 00:18:21.898 "cntlid": 127, 00:18:21.898 "qid": 0, 00:18:21.898 "state": "enabled", 00:18:21.898 "thread": "nvmf_tgt_poll_group_000", 00:18:21.898 "listen_address": { 00:18:21.898 "trtype": "TCP", 00:18:21.898 "adrfam": "IPv4", 00:18:21.898 "traddr": "10.0.0.2", 00:18:21.898 "trsvcid": "4420" 00:18:21.898 }, 00:18:21.898 "peer_address": { 00:18:21.898 "trtype": "TCP", 00:18:21.898 "adrfam": "IPv4", 00:18:21.898 "traddr": "10.0.0.1", 00:18:21.898 "trsvcid": "36162" 00:18:21.898 }, 00:18:21.898 "auth": { 00:18:21.898 "state": "completed", 00:18:21.898 "digest": "sha512", 00:18:21.898 "dhgroup": "ffdhe4096" 00:18:21.898 } 00:18:21.898 } 00:18:21.898 ]' 00:18:21.898 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.156 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.441 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.374 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.631 07:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.631 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:24.197 00:18:24.197 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.197 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.197 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.455 { 00:18:24.455 "cntlid": 129, 00:18:24.455 "qid": 0, 00:18:24.455 "state": "enabled", 00:18:24.455 "thread": "nvmf_tgt_poll_group_000", 00:18:24.455 "listen_address": { 00:18:24.455 "trtype": "TCP", 00:18:24.455 "adrfam": "IPv4", 00:18:24.455 "traddr": "10.0.0.2", 00:18:24.455 "trsvcid": "4420" 00:18:24.455 }, 00:18:24.455 "peer_address": { 00:18:24.455 "trtype": "TCP", 00:18:24.455 "adrfam": "IPv4", 00:18:24.455 "traddr": "10.0.0.1", 00:18:24.455 "trsvcid": "36186" 00:18:24.455 }, 00:18:24.455 "auth": { 00:18:24.455 "state": "completed", 00:18:24.455 "digest": "sha512", 00:18:24.455 "dhgroup": "ffdhe6144" 00:18:24.455 } 00:18:24.455 } 00:18:24.455 ]' 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.455 
07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.455 07:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.712 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:18:25.646 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.904 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.162 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.727 00:18:26.727 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.727 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.727 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.727 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.984 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.984 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.984 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.984 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.984 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:26.984 { 00:18:26.984 "cntlid": 131, 00:18:26.984 "qid": 0, 00:18:26.984 "state": "enabled", 00:18:26.984 "thread": "nvmf_tgt_poll_group_000", 00:18:26.984 "listen_address": { 00:18:26.984 "trtype": "TCP", 00:18:26.984 "adrfam": "IPv4", 00:18:26.984 "traddr": "10.0.0.2", 00:18:26.984 "trsvcid": "4420" 00:18:26.984 }, 00:18:26.985 "peer_address": { 00:18:26.985 "trtype": "TCP", 00:18:26.985 "adrfam": "IPv4", 00:18:26.985 "traddr": "10.0.0.1", 00:18:26.985 "trsvcid": "36204" 00:18:26.985 }, 00:18:26.985 "auth": { 00:18:26.985 "state": "completed", 00:18:26.985 "digest": "sha512", 00:18:26.985 "dhgroup": "ffdhe6144" 00:18:26.985 } 00:18:26.985 } 00:18:26.985 ]' 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.985 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.242 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.175 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.433 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.997 00:18:28.997 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.997 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.997 07:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.255 { 00:18:29.255 "cntlid": 133, 00:18:29.255 "qid": 0, 00:18:29.255 "state": "enabled", 00:18:29.255 "thread": "nvmf_tgt_poll_group_000", 00:18:29.255 "listen_address": { 00:18:29.255 "trtype": "TCP", 00:18:29.255 "adrfam": "IPv4", 00:18:29.255 "traddr": "10.0.0.2", 00:18:29.255 "trsvcid": "4420" 00:18:29.255 }, 00:18:29.255 "peer_address": { 00:18:29.255 "trtype": "TCP", 00:18:29.255 "adrfam": "IPv4", 00:18:29.255 "traddr": "10.0.0.1", 00:18:29.255 "trsvcid": "50146" 00:18:29.255 }, 00:18:29.255 "auth": { 00:18:29.255 "state": "completed", 00:18:29.255 "digest": "sha512", 00:18:29.255 "dhgroup": "ffdhe6144" 00:18:29.255 } 00:18:29.255 } 00:18:29.255 ]' 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.255 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.255 07:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.512 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.512 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.512 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.770 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:18:30.703 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:30.703 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.961 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.961 07:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.526 00:18:31.526 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.526 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.526 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.784 { 00:18:31.784 "cntlid": 135, 00:18:31.784 "qid": 0, 00:18:31.784 "state": "enabled", 00:18:31.784 "thread": "nvmf_tgt_poll_group_000", 00:18:31.784 "listen_address": { 00:18:31.784 "trtype": "TCP", 00:18:31.784 "adrfam": "IPv4", 00:18:31.784 "traddr": "10.0.0.2", 00:18:31.784 "trsvcid": "4420" 00:18:31.784 }, 00:18:31.784 "peer_address": { 00:18:31.784 "trtype": "TCP", 00:18:31.784 "adrfam": "IPv4", 00:18:31.784 "traddr": "10.0.0.1", 00:18:31.784 "trsvcid": 
"50162" 00:18:31.784 }, 00:18:31.784 "auth": { 00:18:31.784 "state": "completed", 00:18:31.784 "digest": "sha512", 00:18:31.784 "dhgroup": "ffdhe6144" 00:18:31.784 } 00:18:31.784 } 00:18:31.784 ]' 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.784 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.041 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.973 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.231 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.163 00:18:34.163 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.163 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.163 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.421 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.421 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.421 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.421 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.679 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.679 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.679 { 00:18:34.679 "cntlid": 137, 00:18:34.679 "qid": 0, 00:18:34.679 "state": "enabled", 00:18:34.679 "thread": "nvmf_tgt_poll_group_000", 00:18:34.679 "listen_address": { 00:18:34.679 "trtype": "TCP", 00:18:34.679 "adrfam": "IPv4", 00:18:34.679 "traddr": "10.0.0.2", 00:18:34.679 "trsvcid": "4420" 00:18:34.679 }, 00:18:34.679 "peer_address": { 00:18:34.679 "trtype": "TCP", 00:18:34.679 "adrfam": "IPv4", 00:18:34.679 "traddr": "10.0.0.1", 00:18:34.679 "trsvcid": "50192" 00:18:34.679 }, 00:18:34.679 "auth": { 00:18:34.679 "state": "completed", 00:18:34.679 "digest": "sha512", 00:18:34.679 "dhgroup": "ffdhe8192" 00:18:34.679 } 00:18:34.679 } 00:18:34.679 ]' 00:18:34.679 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.679 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.679 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.679 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.679 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.679 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.679 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.679 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.936 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.869 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.126 07:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.126 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:37.116 00:18:37.116 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.116 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.116 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.373 { 00:18:37.373 "cntlid": 139, 00:18:37.373 "qid": 0, 00:18:37.373 "state": "enabled", 00:18:37.373 "thread": "nvmf_tgt_poll_group_000", 00:18:37.373 "listen_address": { 00:18:37.373 "trtype": "TCP", 00:18:37.373 "adrfam": "IPv4", 00:18:37.373 "traddr": "10.0.0.2", 00:18:37.373 "trsvcid": "4420" 00:18:37.373 }, 00:18:37.373 "peer_address": { 00:18:37.373 "trtype": "TCP", 00:18:37.373 "adrfam": "IPv4", 00:18:37.373 "traddr": "10.0.0.1", 00:18:37.373 "trsvcid": "50228" 00:18:37.373 }, 00:18:37.373 "auth": { 00:18:37.373 "state": "completed", 00:18:37.373 "digest": "sha512", 00:18:37.373 "dhgroup": "ffdhe8192" 00:18:37.373 } 00:18:37.373 } 00:18:37.373 ]' 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.373 
07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.373 07:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.631 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzdkNzFjY2E1NWZkNjRhZGVlNzcwOGU5NDA4ZjExMTcdWBCJ: --dhchap-ctrl-secret DHHC-1:02:YjdiMzc2Y2NhY2E5NjM0MTFjOTc1Y2FlZjcwMDk4MmVhOGJkMTFhMDg5MWJiMTA0J8z5Tw==: 00:18:38.563 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.563 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.563 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.563 07:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.563 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.563 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.563 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.563 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.821 07:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.821 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.754 00:18:39.754 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.754 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.754 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.012 { 
00:18:40.012 "cntlid": 141, 00:18:40.012 "qid": 0, 00:18:40.012 "state": "enabled", 00:18:40.012 "thread": "nvmf_tgt_poll_group_000", 00:18:40.012 "listen_address": { 00:18:40.012 "trtype": "TCP", 00:18:40.012 "adrfam": "IPv4", 00:18:40.012 "traddr": "10.0.0.2", 00:18:40.012 "trsvcid": "4420" 00:18:40.012 }, 00:18:40.012 "peer_address": { 00:18:40.012 "trtype": "TCP", 00:18:40.012 "adrfam": "IPv4", 00:18:40.012 "traddr": "10.0.0.1", 00:18:40.012 "trsvcid": "50534" 00:18:40.012 }, 00:18:40.012 "auth": { 00:18:40.012 "state": "completed", 00:18:40.012 "digest": "sha512", 00:18:40.012 "dhgroup": "ffdhe8192" 00:18:40.012 } 00:18:40.012 } 00:18:40.012 ]' 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.012 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.269 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjE4ZjlmNGE5YjNlZGEzYzY5NjhjOGM3MWFjMjMwY2E1Mjg4MzViZWUwMWRlODBidueOFA==: --dhchap-ctrl-secret DHHC-1:01:OGUxYmU4ZDg0MjIzMjMzMTI1NjE3MzAzMjIzOWY2NDASf2IG: 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.654 07:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.654 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.588 00:18:42.588 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.588 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.588 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.846 07:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.846 { 00:18:42.846 "cntlid": 143, 00:18:42.846 "qid": 0, 00:18:42.846 "state": "enabled", 00:18:42.846 "thread": "nvmf_tgt_poll_group_000", 00:18:42.846 "listen_address": { 00:18:42.846 "trtype": "TCP", 00:18:42.846 "adrfam": "IPv4", 00:18:42.846 "traddr": "10.0.0.2", 00:18:42.846 "trsvcid": "4420" 00:18:42.846 }, 00:18:42.846 "peer_address": { 00:18:42.846 "trtype": "TCP", 00:18:42.846 "adrfam": "IPv4", 00:18:42.846 "traddr": "10.0.0.1", 00:18:42.846 "trsvcid": "50564" 00:18:42.846 }, 00:18:42.846 "auth": { 00:18:42.846 "state": "completed", 00:18:42.846 "digest": "sha512", 00:18:42.846 "dhgroup": "ffdhe8192" 00:18:42.846 } 00:18:42.846 } 00:18:42.846 ]' 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.846 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.104 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.104 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.104 07:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.104 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.104 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.362 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:44.295 07:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.295 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:44.553 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.554 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.486 00:18:45.486 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.486 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.486 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.744 { 00:18:45.744 "cntlid": 145, 00:18:45.744 "qid": 0, 00:18:45.744 "state": "enabled", 
00:18:45.744 "thread": "nvmf_tgt_poll_group_000", 00:18:45.744 "listen_address": { 00:18:45.744 "trtype": "TCP", 00:18:45.744 "adrfam": "IPv4", 00:18:45.744 "traddr": "10.0.0.2", 00:18:45.744 "trsvcid": "4420" 00:18:45.744 }, 00:18:45.744 "peer_address": { 00:18:45.744 "trtype": "TCP", 00:18:45.744 "adrfam": "IPv4", 00:18:45.744 "traddr": "10.0.0.1", 00:18:45.744 "trsvcid": "50582" 00:18:45.744 }, 00:18:45.744 "auth": { 00:18:45.744 "state": "completed", 00:18:45.744 "digest": "sha512", 00:18:45.744 "dhgroup": "ffdhe8192" 00:18:45.744 } 00:18:45.744 } 00:18:45.744 ]' 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.744 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.002 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NDc3MmM3NWI0OWQ3MjBhMmEzOGUxODUzNjI5MTQ5MjI5MDM4MDliZjQ0MTk0Yjg2RNltXg==: --dhchap-ctrl-secret DHHC-1:03:MDhhODM5MjUxMDkzM2EwZDIwZGRiMmQzZjQ2NzJiMDIzNWE5YmMxNDVlOGZmODA4MmUzM2YzMWJjMmYzNGFkM51zzXE=: 00:18:46.935 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.935 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.935 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.935 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.935 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:46.936 
07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.936 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:47.868 request: 00:18:47.868 { 00:18:47.868 "name": "nvme0", 00:18:47.868 "trtype": "tcp", 00:18:47.868 "traddr": "10.0.0.2", 00:18:47.868 "adrfam": "ipv4", 00:18:47.868 "trsvcid": "4420", 00:18:47.868 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:47.868 "prchk_reftag": false, 00:18:47.868 "prchk_guard": false, 00:18:47.868 "hdgst": false, 00:18:47.868 "ddgst": false, 00:18:47.868 "dhchap_key": "key2", 
00:18:47.868 "method": "bdev_nvme_attach_controller", 00:18:47.868 "req_id": 1 00:18:47.868 } 00:18:47.868 Got JSON-RPC error response 00:18:47.868 response: 00:18:47.868 { 00:18:47.868 "code": -5, 00:18:47.868 "message": "Input/output error" 00:18:47.868 } 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.868 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.869 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:48.801 request: 00:18:48.801 { 00:18:48.801 "name": "nvme0", 00:18:48.801 
"trtype": "tcp", 00:18:48.801 "traddr": "10.0.0.2", 00:18:48.801 "adrfam": "ipv4", 00:18:48.801 "trsvcid": "4420", 00:18:48.801 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:48.801 "prchk_reftag": false, 00:18:48.801 "prchk_guard": false, 00:18:48.801 "hdgst": false, 00:18:48.801 "ddgst": false, 00:18:48.801 "dhchap_key": "key1", 00:18:48.801 "dhchap_ctrlr_key": "ckey2", 00:18:48.801 "method": "bdev_nvme_attach_controller", 00:18:48.801 "req_id": 1 00:18:48.801 } 00:18:48.801 Got JSON-RPC error response 00:18:48.801 response: 00:18:48.801 { 00:18:48.801 "code": -5, 00:18:48.801 "message": "Input/output error" 00:18:48.801 } 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:48.801 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.802 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.802 07:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.735 request: 00:18:49.735 { 00:18:49.735 "name": "nvme0", 00:18:49.735 "trtype": "tcp", 00:18:49.735 "traddr": "10.0.0.2", 00:18:49.735 "adrfam": "ipv4", 00:18:49.735 "trsvcid": "4420", 00:18:49.735 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:49.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:49.735 "prchk_reftag": false, 00:18:49.735 "prchk_guard": false, 00:18:49.735 "hdgst": false, 00:18:49.735 "ddgst": false, 00:18:49.735 "dhchap_key": "key1", 00:18:49.735 "dhchap_ctrlr_key": "ckey1", 00:18:49.735 "method": "bdev_nvme_attach_controller", 00:18:49.735 "req_id": 1 00:18:49.735 } 00:18:49.735 Got JSON-RPC error response 00:18:49.735 response: 00:18:49.735 { 00:18:49.735 "code": -5, 00:18:49.735 "message": "Input/output error" 00:18:49.735 } 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2461858 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2461858 ']' 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2461858 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2461858 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2461858' 00:18:49.736 killing process with pid 2461858 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2461858 00:18:49.736 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2461858 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2484555 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2484555 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2484555 ']' 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.994 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2484555 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2484555 ']' 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.252 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.511 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.511 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:50.511 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:50.511 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.511 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.768 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.768 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:50.768 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.768 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.768 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.769 
07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.769 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.752 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.752 07:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.752 { 00:18:51.752 "cntlid": 1, 00:18:51.752 "qid": 0, 00:18:51.752 "state": "enabled", 00:18:51.752 "thread": "nvmf_tgt_poll_group_000", 00:18:51.752 "listen_address": { 00:18:51.752 "trtype": "TCP", 00:18:51.752 "adrfam": "IPv4", 00:18:51.752 "traddr": "10.0.0.2", 00:18:51.752 "trsvcid": "4420" 00:18:51.752 }, 00:18:51.752 "peer_address": { 00:18:51.752 "trtype": "TCP", 00:18:51.752 "adrfam": "IPv4", 00:18:51.752 "traddr": "10.0.0.1", 00:18:51.752 "trsvcid": "46266" 00:18:51.752 }, 00:18:51.752 "auth": { 00:18:51.752 "state": "completed", 00:18:51.752 "digest": "sha512", 00:18:51.752 "dhgroup": "ffdhe8192" 00:18:51.752 } 00:18:51.752 } 00:18:51.752 ]' 00:18:51.752 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.010 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.269 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NTc3ZjI1YjJhZjYxZjFiNDBjZDExMjYzNDNkMzg0ZDU2NmY2OWJmZjc0ZmI3Y2ZiZjEwMjRhNDczZDUzNDllMQZtW8o=: 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:53.203 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:53.461 07:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.461 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.718 request: 00:18:53.718 { 00:18:53.718 "name": "nvme0", 00:18:53.718 "trtype": "tcp", 00:18:53.718 
"traddr": "10.0.0.2", 00:18:53.718 "adrfam": "ipv4", 00:18:53.718 "trsvcid": "4420", 00:18:53.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:53.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:53.718 "prchk_reftag": false, 00:18:53.718 "prchk_guard": false, 00:18:53.718 "hdgst": false, 00:18:53.718 "ddgst": false, 00:18:53.718 "dhchap_key": "key3", 00:18:53.718 "method": "bdev_nvme_attach_controller", 00:18:53.718 "req_id": 1 00:18:53.718 } 00:18:53.718 Got JSON-RPC error response 00:18:53.718 response: 00:18:53.718 { 00:18:53.718 "code": -5, 00:18:53.718 "message": "Input/output error" 00:18:53.718 } 00:18:53.718 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:53.718 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:53.719 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:53.719 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:53.719 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:53.719 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:53.719 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:53.719 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:53.976 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.977 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.977 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.235 request: 00:18:54.235 { 00:18:54.235 "name": "nvme0", 00:18:54.235 "trtype": "tcp", 00:18:54.235 "traddr": "10.0.0.2", 00:18:54.235 "adrfam": "ipv4", 00:18:54.235 "trsvcid": "4420", 00:18:54.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:54.235 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:54.235 "prchk_reftag": false, 00:18:54.235 "prchk_guard": false, 00:18:54.235 "hdgst": false, 00:18:54.235 "ddgst": false, 00:18:54.235 "dhchap_key": "key3", 00:18:54.235 "method": "bdev_nvme_attach_controller", 00:18:54.235 "req_id": 1 00:18:54.235 } 00:18:54.235 Got JSON-RPC error response 00:18:54.235 response: 00:18:54.235 { 00:18:54.235 "code": -5, 00:18:54.235 "message": "Input/output error" 00:18:54.235 } 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.235 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.493 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.751 request: 00:18:54.751 { 00:18:54.751 "name": "nvme0", 00:18:54.751 "trtype": "tcp", 00:18:54.751 "traddr": "10.0.0.2", 00:18:54.751 "adrfam": "ipv4", 00:18:54.751 "trsvcid": "4420", 00:18:54.751 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:54.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:54.751 "prchk_reftag": false, 00:18:54.751 "prchk_guard": false, 00:18:54.751 "hdgst": false, 00:18:54.751 "ddgst": false, 00:18:54.751 "dhchap_key": "key0", 00:18:54.751 "dhchap_ctrlr_key": "key1", 00:18:54.751 "method": "bdev_nvme_attach_controller", 00:18:54.751 "req_id": 1 00:18:54.751 } 00:18:54.751 Got JSON-RPC error response 00:18:54.751 response: 00:18:54.751 { 00:18:54.751 "code": -5, 00:18:54.751 "message": "Input/output error" 00:18:54.751 } 00:18:54.751 07:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:54.751 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.751 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.751 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.751 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:54.751 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.009 00:18:55.009 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:55.009 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.009 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:55.266 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.266 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.266 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2461884 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2461884 ']' 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2461884 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2461884 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2461884' 00:18:55.524 killing process with pid 2461884 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2461884 00:18:55.524 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2461884 00:18:56.088 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:56.088 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:56.088 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:56.088 07:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.089 rmmod nvme_tcp 00:18:56.089 rmmod nvme_fabrics 00:18:56.089 rmmod nvme_keyring 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2484555 ']' 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2484555 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2484555 ']' 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2484555 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2484555 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2484555' 00:18:56.089 killing process with pid 2484555 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2484555 00:18:56.089 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2484555 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.347 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vYP /tmp/spdk.key-sha256.9Wv /tmp/spdk.key-sha384.67p /tmp/spdk.key-sha512.f0W /tmp/spdk.key-sha512.97X /tmp/spdk.key-sha384.Lf8 /tmp/spdk.key-sha256.HdI '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:58.874 00:18:58.874 real 3m9.670s 00:18:58.874 user 7m21.635s 00:18:58.874 sys 0m25.027s 00:18:58.874 07:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.874 ************************************ 00:18:58.874 END TEST nvmf_auth_target 00:18:58.874 ************************************ 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.874 ************************************ 00:18:58.874 START TEST nvmf_bdevio_no_huge 00:18:58.874 ************************************ 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:58.874 * Looking for test storage... 
00:18:58.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.874 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.874 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:58.875 
07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.875 07:24:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.774 07:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.774 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:00.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:00.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:00.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:00.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.775 07:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:00.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:19:00.775 00:19:00.775 --- 10.0.0.2 ping statistics --- 00:19:00.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.775 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:19:00.775 00:19:00.775 --- 10.0.0.1 ping statistics --- 00:19:00.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.775 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.775 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2487305 00:19:00.775 07:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2487305 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2487305 ']' 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.775 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:00.775 [2024-07-25 07:24:33.059730] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:00.775 [2024-07-25 07:24:33.059818] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:00.775 [2024-07-25 07:24:33.142374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.775 [2024-07-25 07:24:33.251551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:00.776 [2024-07-25 07:24:33.251611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.776 [2024-07-25 07:24:33.251629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.776 [2024-07-25 07:24:33.251641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.776 [2024-07-25 07:24:33.251651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.776 [2024-07-25 07:24:33.251707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:19:00.776 [2024-07-25 07:24:33.251768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:19:00.776 [2024-07-25 07:24:33.251833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:19:00.776 [2024-07-25 07:24:33.251836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 [2024-07-25 07:24:34.031578] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 Malloc0 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 [2024-07-25 07:24:34.069273] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.705 { 00:19:01.705 "params": { 00:19:01.705 "name": "Nvme$subsystem", 00:19:01.705 "trtype": "$TEST_TRANSPORT", 00:19:01.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.705 "adrfam": "ipv4", 00:19:01.705 "trsvcid": "$NVMF_PORT", 00:19:01.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.705 "hdgst": ${hdgst:-false}, 00:19:01.705 "ddgst": ${ddgst:-false} 00:19:01.705 }, 00:19:01.705 "method": "bdev_nvme_attach_controller" 00:19:01.705 } 00:19:01.705 EOF 00:19:01.705 )") 00:19:01.705 07:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:01.705 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:01.705 "params": { 00:19:01.705 "name": "Nvme1", 00:19:01.705 "trtype": "tcp", 00:19:01.705 "traddr": "10.0.0.2", 00:19:01.705 "adrfam": "ipv4", 00:19:01.705 "trsvcid": "4420", 00:19:01.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.705 "hdgst": false, 00:19:01.705 "ddgst": false 00:19:01.705 }, 00:19:01.705 "method": "bdev_nvme_attach_controller" 00:19:01.705 }' 00:19:01.705 [2024-07-25 07:24:34.112826] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:01.705 [2024-07-25 07:24:34.112922] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2487463 ] 00:19:01.705 [2024-07-25 07:24:34.176805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.963 [2024-07-25 07:24:34.293390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.963 [2024-07-25 07:24:34.293444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.963 [2024-07-25 07:24:34.293448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.220 I/O targets: 00:19:02.220 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:02.220 00:19:02.220 00:19:02.220 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.220 http://cunit.sourceforge.net/ 00:19:02.220 00:19:02.220 00:19:02.220 Suite: bdevio tests on: Nvme1n1 00:19:02.220 Test: blockdev write read block 
...passed 00:19:02.220 Test: blockdev write zeroes read block ...passed 00:19:02.220 Test: blockdev write zeroes read no split ...passed 00:19:02.220 Test: blockdev write zeroes read split ...passed 00:19:02.220 Test: blockdev write zeroes read split partial ...passed 00:19:02.220 Test: blockdev reset ...[2024-07-25 07:24:34.704680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.220 [2024-07-25 07:24:34.704799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0fc0 (9): Bad file descriptor 00:19:02.220 [2024-07-25 07:24:34.734107] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:02.220 passed 00:19:02.220 Test: blockdev write read 8 blocks ...passed 00:19:02.220 Test: blockdev write read size > 128k ...passed 00:19:02.220 Test: blockdev write read invalid size ...passed 00:19:02.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:02.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:02.478 Test: blockdev write read max offset ...passed 00:19:02.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:02.478 Test: blockdev writev readv 8 blocks ...passed 00:19:02.478 Test: blockdev writev readv 30 x 1block ...passed 00:19:02.478 Test: blockdev writev readv block ...passed 00:19:02.478 Test: blockdev writev readv size > 128k ...passed 00:19:02.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:02.478 Test: blockdev comparev and writev ...[2024-07-25 07:24:34.950403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.950440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.950463] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.950480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.950818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.950843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.950865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.950881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.951218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.951250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.951274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.951291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.951626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.951650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:19:02.478 [2024-07-25 07:24:34.951671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.478 [2024-07-25 07:24:34.951687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:02.478 passed 00:19:02.736 Test: blockdev nvme passthru rw ...passed 00:19:02.736 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:24:35.034542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:02.736 [2024-07-25 07:24:35.034570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:02.736 [2024-07-25 07:24:35.034746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:02.736 [2024-07-25 07:24:35.034770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:02.736 [2024-07-25 07:24:35.034951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:02.736 [2024-07-25 07:24:35.034975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:02.736 [2024-07-25 07:24:35.035148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:02.736 [2024-07-25 07:24:35.035171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:02.736 passed 00:19:02.736 Test: blockdev nvme admin passthru ...passed 00:19:02.736 Test: blockdev copy ...passed 00:19:02.736 00:19:02.736 Run Summary: Type Total Ran Passed Failed Inactive 
00:19:02.736 suites 1 1 n/a 0 0 00:19:02.736 tests 23 23 23 0 0 00:19:02.736 asserts 152 152 152 0 n/a 00:19:02.736 00:19:02.736 Elapsed time = 1.185 seconds 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:02.994 rmmod nvme_tcp 00:19:02.994 rmmod nvme_fabrics 00:19:02.994 rmmod nvme_keyring 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:02.994 
07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2487305 ']' 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2487305 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2487305 ']' 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2487305 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.994 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2487305 00:19:03.252 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:03.252 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:03.252 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2487305' 00:19:03.252 killing process with pid 2487305 00:19:03.252 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2487305 00:19:03.252 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2487305 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.510 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.038 00:19:06.038 real 0m7.069s 00:19:06.038 user 0m13.444s 00:19:06.038 sys 0m2.435s 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:06.038 ************************************ 00:19:06.038 END TEST nvmf_bdevio_no_huge 00:19:06.038 ************************************ 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.038 ************************************ 00:19:06.038 START TEST nvmf_tls 00:19:06.038 ************************************ 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:06.038 * Looking for test storage... 
00:19:06.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.038 
07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.038 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.039 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.039 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.039 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.039 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.988 07:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.988 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.989 07:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:07.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:07.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.989 07:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:07.989 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:07.989 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.989 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:07.989  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:07.989  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms
00:19:07.989  
00:19:07.989  --- 10.0.0.2 ping statistics ---
00:19:07.989  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:07.989  rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:07.989  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:07.989  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:19:07.989  
00:19:07.989  --- 10.0.0.1 ping statistics ---
00:19:07.989  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:07.989  rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2489540
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2489540
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2489540 ']'
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:07.989  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:07.989  [2024-07-25 07:24:40.289618] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:19:07.989  [2024-07-25 07:24:40.289690] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:07.989  EAL: No free 2048 kB hugepages reported on node 1
00:19:07.989  [2024-07-25 07:24:40.354530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:07.989  [2024-07-25 07:24:40.460431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:07.989  [2024-07-25 07:24:40.460486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:07.989  [2024-07-25 07:24:40.460500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:07.989  [2024-07-25 07:24:40.460512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:07.989  [2024-07-25 07:24:40.460521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:07.989  [2024-07-25 07:24:40.460546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:07.989  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:07.990  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:07.990  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:07.990  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:07.990  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:08.247  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:08.247  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']'
00:19:08.247  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:19:08.247  true
00:19:08.247  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:08.247  07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version
00:19:08.504  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0
00:19:08.504  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]]
00:19:08.504  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:19:09.070  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:09.070  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version
00:19:09.070  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13
00:19:09.070  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]]
00:19:09.070  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:19:09.328  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:09.328  07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version
00:19:09.585  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7
00:19:09.585  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]]
00:19:09.585  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:09.585  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls
00:19:09.843  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false
00:19:09.843  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]]
00:19:09.843  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:19:10.409  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:10.409  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:19:10.409  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:19:10.409  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:19:10.409  07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:19:10.666  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:10.666  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:19:10.925  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:19:10.925  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:19:10.925  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:19:10.925  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:19:10.925  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:19:10.925  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:19:10.926  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:19:10.926  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:19:10.926  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.YVksELHHo5
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.vaLfKhnuVo
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.YVksELHHo5
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vaLfKhnuVo
00:19:11.184  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:19:11.441  07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:19:11.699  07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.YVksELHHo5
00:19:11.699  07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YVksELHHo5
00:19:11.699  07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:11.956  [2024-07-25 07:24:44.399981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:11.956  07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:19:12.214  07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:19:12.471  [2024-07-25 07:24:44.885304] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:19:12.471  [2024-07-25 07:24:44.885554] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:12.471  07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:19:12.729  malloc0
00:19:12.729  07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:19:12.986  07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YVksELHHo5
00:19:13.244  [2024-07-25 07:24:45.614487] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:19:13.244  07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YVksELHHo5
00:19:13.244  EAL: No free 2048 kB hugepages reported on node 1
00:19:25.436  Initializing NVMe Controllers
00:19:25.436  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:25.436  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:25.436  Initialization complete. Launching workers.
00:19:25.436  ========================================================
00:19:25.436                                                                                                               Latency(us)
00:19:25.436  Device Information                                                                             :       IOPS      MiB/s    Average        min        max
00:19:25.436  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:                      7864.26      30.72    8140.82    1227.28    9300.75
00:19:25.436  ========================================================
00:19:25.436  Total                                                                                          :    7864.26      30.72    8140.82    1227.28    9300.75
00:19:25.436  
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YVksELHHo5
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YVksELHHo5'
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:25.436  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2491425
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2491425 /var/tmp/bdevperf.sock
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2491425 ']'
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:25.437  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:25.437  07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:25.437  [2024-07-25 07:24:55.799096] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:19:25.437  [2024-07-25 07:24:55.799189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491425 ]
00:19:25.437  EAL: No free 2048 kB hugepages reported on node 1
00:19:25.437  [2024-07-25 07:24:55.858825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:25.437  [2024-07-25 07:24:55.965676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:25.437  07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:25.437  07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:25.437  07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YVksELHHo5
00:19:25.437  [2024-07-25 07:24:56.304712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:25.437  [2024-07-25 07:24:56.304881] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:25.437  TLSTESTn1
00:19:25.437  07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:25.437  Running I/O for 10 seconds...
00:19:35.398  
00:19:35.398                                                                                                  Latency(us)
00:19:35.398  Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:35.398  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:35.398  	 Verification LBA range: start 0x0 length 0x2000
00:19:35.398  	 TLSTESTn1           :      10.04    2998.19      11.71       0.00     0.00    42584.68    8349.77    65244.73
00:19:35.398  ===================================================================================================================
00:19:35.398  Total                                  :                 2998.19      11.71       0.00     0.00    42584.68    8349.77    65244.73
00:19:35.398  0
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2491425
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2491425 ']'
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2491425
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2491425
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2491425'
00:19:35.398  killing process with pid 2491425
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2491425
00:19:35.398  Received shutdown signal, test time was about 10.000000 seconds
00:19:35.398  
00:19:35.398                                                  Latency(us)
00:19:35.398  Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:35.398  ===================================================================================================================
00:19:35.398  Total                       :                    0.00       0.00       0.00     0.00        0.00       0.00       0.00
00:19:35.398  [2024-07-25 07:25:06.623094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2491425
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vaLfKhnuVo
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vaLfKhnuVo
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vaLfKhnuVo
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vaLfKhnuVo'
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2492737
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2492737 /var/tmp/bdevperf.sock
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2492737 ']'
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:35.398  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:35.398  07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:35.398  [2024-07-25 07:25:06.932914] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:19:35.398  [2024-07-25 07:25:06.933004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492737 ]
00:19:35.398  EAL: No free 2048 kB hugepages reported on node 1
00:19:35.398  [2024-07-25 07:25:06.990368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.398  [2024-07-25 07:25:07.092628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaLfKhnuVo
00:19:35.398  [2024-07-25 07:25:07.453526] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:35.398  [2024-07-25 07:25:07.453646] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:35.398  [2024-07-25 07:25:07.462652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:35.398  [2024-07-25 07:25:07.463547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339090 (107): Transport endpoint is not connected
00:19:35.398  [2024-07-25 07:25:07.464542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339090 (9): Bad file descriptor
00:19:35.398  [2024-07-25 07:25:07.465538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:35.398  [2024-07-25 07:25:07.465559] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:35.398  [2024-07-25 07:25:07.465576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:35.398  request:
00:19:35.398  {
00:19:35.398    "name": "TLSTEST",
00:19:35.398    "trtype": "tcp",
00:19:35.398    "traddr": "10.0.0.2",
00:19:35.398    "adrfam": "ipv4",
00:19:35.398    "trsvcid": "4420",
00:19:35.398    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:35.398    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:35.398    "prchk_reftag": false,
00:19:35.398    "prchk_guard": false,
00:19:35.398    "hdgst": false,
00:19:35.398    "ddgst": false,
00:19:35.398    "psk": "/tmp/tmp.vaLfKhnuVo",
00:19:35.398    "method": "bdev_nvme_attach_controller",
00:19:35.398    "req_id": 1
00:19:35.398  }
00:19:35.398  Got JSON-RPC error response
00:19:35.398  response:
00:19:35.398  {
00:19:35.398    "code": -5,
00:19:35.398    "message": "Input/output error"
00:19:35.398  }
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2492737
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2492737 ']'
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2492737
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:35.398  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492737
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492737'
00:19:35.399  killing process with pid 2492737
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2492737
00:19:35.399  Received shutdown signal, test time was about 10.000000 seconds
00:19:35.399  
00:19:35.399                                                  Latency(us)
00:19:35.399  Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:35.399  ===================================================================================================================
00:19:35.399  Total                       :                    0.00       0.00       0.00     0.00        0.00 18446744073709551616.00       0.00
00:19:35.399  [2024-07-25 07:25:07.510732] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2492737
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YVksELHHo5
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YVksELHHo5
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YVksELHHo5
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YVksELHHo5'
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2492770
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2492770 /var/tmp/bdevperf.sock
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2492770 ']'
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:35.399  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:35.399  07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:35.399  [2024-07-25 07:25:07.786659] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:19:35.399  [2024-07-25 07:25:07.786764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492770 ]
00:19:35.399  EAL: No free 2048 kB hugepages reported on node 1
00:19:35.399  [2024-07-25 07:25:07.848753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.657  [2024-07-25 07:25:07.955592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:35.657  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:35.657  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:35.657  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.YVksELHHo5
00:19:35.915  [2024-07-25 07:25:08.284339] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:35.915  [2024-07-25 07:25:08.284462] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:35.915  [2024-07-25 07:25:08.294728] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:19:35.915  [2024-07-25 07:25:08.294774] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:19:35.915  [2024-07-25 07:25:08.294828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:35.915  [2024-07-25 07:25:08.295366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa48090 (107): Transport endpoint is not connected
00:19:35.915  [2024-07-25 07:25:08.296356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa48090 (9): Bad file descriptor
00:19:35.915  [2024-07-25 07:25:08.297356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:35.915  [2024-07-25 07:25:08.297376] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:35.915  [2024-07-25 07:25:08.297395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:35.915  request:
00:19:35.915  {
00:19:35.915    "name": "TLSTEST",
00:19:35.915    "trtype": "tcp",
00:19:35.915    "traddr": "10.0.0.2",
00:19:35.915    "adrfam": "ipv4",
00:19:35.915    "trsvcid": "4420",
00:19:35.915    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:35.915    "hostnqn": "nqn.2016-06.io.spdk:host2",
00:19:35.915    "prchk_reftag": false,
00:19:35.915    "prchk_guard": false,
00:19:35.915    "hdgst": false,
00:19:35.915    "ddgst": false,
00:19:35.915    "psk": "/tmp/tmp.YVksELHHo5",
00:19:35.915    "method": "bdev_nvme_attach_controller",
00:19:35.915    "req_id": 1
00:19:35.915  }
00:19:35.915  Got JSON-RPC error response
00:19:35.915  response:
00:19:35.915  {
00:19:35.915    "code": -5,
00:19:35.915    "message": "Input/output error"
00:19:35.915  }
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2492770
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2492770 ']'
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2492770
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492770
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492770'
00:19:35.915  killing process with pid 2492770
00:19:35.915  07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2492770
00:19:35.915  Received shutdown signal, test time was
about 10.000000 seconds 00:19:35.915 00:19:35.915 Latency(us) 00:19:35.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.915 =================================================================================================================== 00:19:35.915 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.916 [2024-07-25 07:25:08.348545] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.916 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2492770 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YVksELHHo5 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YVksELHHo5 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YVksELHHo5 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YVksELHHo5' 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2492899 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2492899 /var/tmp/bdevperf.sock 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2492899 ']' 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.228 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.228 [2024-07-25 07:25:08.647008] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:36.228 [2024-07-25 07:25:08.647081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492899 ] 00:19:36.228 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.228 [2024-07-25 07:25:08.705430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.485 [2024-07-25 07:25:08.812926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.485 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.485 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.485 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YVksELHHo5 00:19:36.743 [2024-07-25 07:25:09.155683] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.743 [2024-07-25 07:25:09.155783] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:36.743 [2024-07-25 07:25:09.166942] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:36.743 [2024-07-25 07:25:09.166970] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:36.743 [2024-07-25 07:25:09.167030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:36.743 [2024-07-25 07:25:09.167586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202f090 (107): Transport endpoint is not connected 00:19:36.743 [2024-07-25 07:25:09.168577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202f090 (9): Bad file descriptor 00:19:36.743 [2024-07-25 07:25:09.169576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:36.743 [2024-07-25 07:25:09.169595] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:36.743 [2024-07-25 07:25:09.169620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:36.743 request: 00:19:36.743 { 00:19:36.743 "name": "TLSTEST", 00:19:36.743 "trtype": "tcp", 00:19:36.743 "traddr": "10.0.0.2", 00:19:36.743 "adrfam": "ipv4", 00:19:36.743 "trsvcid": "4420", 00:19:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:36.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.743 "prchk_reftag": false, 00:19:36.743 "prchk_guard": false, 00:19:36.743 "hdgst": false, 00:19:36.743 "ddgst": false, 00:19:36.743 "psk": "/tmp/tmp.YVksELHHo5", 00:19:36.743 "method": "bdev_nvme_attach_controller", 00:19:36.743 "req_id": 1 00:19:36.743 } 00:19:36.743 Got JSON-RPC error response 00:19:36.743 response: 00:19:36.743 { 00:19:36.743 "code": -5, 00:19:36.743 "message": "Input/output error" 00:19:36.743 } 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2492899 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2492899 ']' 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2492899 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492899 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492899' 00:19:36.743 killing process with pid 2492899 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2492899 00:19:36.743 Received shutdown signal, test time was 
about 10.000000 seconds 00:19:36.743 00:19:36.743 Latency(us) 00:19:36.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.743 =================================================================================================================== 00:19:36.743 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.743 [2024-07-25 07:25:09.211464] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:36.743 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2492899 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:37.001 07:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2493036 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2493036 /var/tmp/bdevperf.sock 00:19:37.001 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2493036 ']' 00:19:37.002 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.002 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.002 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:37.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.002 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.002 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.002 [2024-07-25 07:25:09.486202] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:37.002 [2024-07-25 07:25:09.486299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493036 ] 00:19:37.002 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.260 [2024-07-25 07:25:09.545705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.260 [2024-07-25 07:25:09.658457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.260 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.260 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.260 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:37.517 [2024-07-25 07:25:09.989180] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:37.517 [2024-07-25 07:25:09.990634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a397f0 (9): Bad file descriptor 00:19:37.517 [2024-07-25 07:25:09.991630] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.517 [2024-07-25 07:25:09.991650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:37.517 [2024-07-25 07:25:09.991668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:37.517 request: 00:19:37.517 { 00:19:37.517 "name": "TLSTEST", 00:19:37.517 "trtype": "tcp", 00:19:37.517 "traddr": "10.0.0.2", 00:19:37.517 "adrfam": "ipv4", 00:19:37.517 "trsvcid": "4420", 00:19:37.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.517 "prchk_reftag": false, 00:19:37.517 "prchk_guard": false, 00:19:37.517 "hdgst": false, 00:19:37.517 "ddgst": false, 00:19:37.517 "method": "bdev_nvme_attach_controller", 00:19:37.517 "req_id": 1 00:19:37.517 } 00:19:37.517 Got JSON-RPC error response 00:19:37.517 response: 00:19:37.517 { 00:19:37.517 "code": -5, 00:19:37.517 "message": "Input/output error" 00:19:37.517 } 00:19:37.517 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2493036 00:19:37.517 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2493036 ']' 00:19:37.517 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2493036 00:19:37.517 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:37.517 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.517 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2493036 00:19:37.518 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:37.518 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:37.518 07:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2493036' 00:19:37.518 killing process with pid 2493036 00:19:37.518 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2493036 00:19:37.518 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.518 00:19:37.518 Latency(us) 00:19:37.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.518 =================================================================================================================== 00:19:37.518 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.518 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2493036 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2489540 00:19:37.775 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2489540 ']' 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2489540 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489540 00:19:38.032 
07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489540' 00:19:38.032 killing process with pid 2489540 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2489540 00:19:38.032 [2024-07-25 07:25:10.335137] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:38.032 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2489540 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.vPeRAgcRz1 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.vPeRAgcRz1 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2493189 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2493189 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2493189 ']' 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.290 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.290 [2024-07-25 07:25:10.731802] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:38.290 [2024-07-25 07:25:10.731899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.290 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.290 [2024-07-25 07:25:10.795701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.554 [2024-07-25 07:25:10.904574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.554 [2024-07-25 07:25:10.904631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.554 [2024-07-25 07:25:10.904657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.554 [2024-07-25 07:25:10.904668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.554 [2024-07-25 07:25:10.904679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.554 [2024-07-25 07:25:10.904705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.vPeRAgcRz1 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vPeRAgcRz1 00:19:38.554 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.814 [2024-07-25 07:25:11.325057] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.072 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:39.330 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:39.587 [2024-07-25 07:25:11.918619] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.587 [2024-07-25 07:25:11.918872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:39.587 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:39.845 malloc0 00:19:39.845 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:40.102 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:19:40.359 [2024-07-25 07:25:12.672506] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vPeRAgcRz1 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vPeRAgcRz1' 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2493471 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.359 07:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2493471 /var/tmp/bdevperf.sock 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2493471 ']' 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.359 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.359 [2024-07-25 07:25:12.730980] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:19:40.359 [2024-07-25 07:25:12.731064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493471 ] 00:19:40.359 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.359 [2024-07-25 07:25:12.790380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.617 [2024-07-25 07:25:12.904061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.617 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.617 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.617 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:19:40.874 [2024-07-25 07:25:13.248614] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.874 [2024-07-25 07:25:13.248730] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:40.874 TLSTESTn1 00:19:40.874 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:41.130 Running I/O for 10 seconds... 
00:19:51.087 00:19:51.087 Latency(us) 00:19:51.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.087 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:51.087 Verification LBA range: start 0x0 length 0x2000 00:19:51.087 TLSTESTn1 : 10.04 2900.18 11.33 0.00 0.00 44030.34 6407.96 62137.84 00:19:51.088 =================================================================================================================== 00:19:51.088 Total : 2900.18 11.33 0.00 0.00 44030.34 6407.96 62137.84 00:19:51.088 0 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2493471 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2493471 ']' 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2493471 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2493471 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2493471' 00:19:51.088 killing process with pid 2493471 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2493471 00:19:51.088 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.088 
00:19:51.088 Latency(us) 00:19:51.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.088 =================================================================================================================== 00:19:51.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.088 [2024-07-25 07:25:23.565331] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:51.088 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2493471 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.vPeRAgcRz1 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vPeRAgcRz1 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vPeRAgcRz1 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vPeRAgcRz1 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.345 07:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vPeRAgcRz1' 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2494787 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2494787 /var/tmp/bdevperf.sock 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2494787 ']' 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.345 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.602 [2024-07-25 07:25:23.877363] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:51.602 [2024-07-25 07:25:23.877454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494787 ] 00:19:51.602 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.602 [2024-07-25 07:25:23.936188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.602 [2024-07-25 07:25:24.038174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.859 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.859 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.860 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:19:52.117 [2024-07-25 07:25:24.423563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.117 [2024-07-25 07:25:24.423636] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:52.117 [2024-07-25 07:25:24.423657] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.vPeRAgcRz1 00:19:52.117 request: 00:19:52.117 { 00:19:52.117 "name": "TLSTEST", 00:19:52.117 "trtype": "tcp", 00:19:52.117 "traddr": "10.0.0.2", 00:19:52.117 
"adrfam": "ipv4", 00:19:52.117 "trsvcid": "4420", 00:19:52.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.117 "prchk_reftag": false, 00:19:52.117 "prchk_guard": false, 00:19:52.117 "hdgst": false, 00:19:52.117 "ddgst": false, 00:19:52.117 "psk": "/tmp/tmp.vPeRAgcRz1", 00:19:52.117 "method": "bdev_nvme_attach_controller", 00:19:52.117 "req_id": 1 00:19:52.117 } 00:19:52.117 Got JSON-RPC error response 00:19:52.117 response: 00:19:52.117 { 00:19:52.117 "code": -1, 00:19:52.117 "message": "Operation not permitted" 00:19:52.117 } 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2494787 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2494787 ']' 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2494787 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2494787 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2494787' 00:19:52.117 killing process with pid 2494787 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2494787 00:19:52.117 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.117 00:19:52.117 Latency(us) 00:19:52.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:52.117 =================================================================================================================== 00:19:52.117 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.117 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2494787 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2493189 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2493189 ']' 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2493189 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2493189 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2493189' 00:19:52.375 killing process with pid 2493189 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2493189 00:19:52.375 [2024-07-25 07:25:24.762073] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:52.375 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2493189 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2494931 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2494931 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2494931 ']' 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.633 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.633 [2024-07-25 07:25:25.088016] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:52.633 [2024-07-25 07:25:25.088109] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.633 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.633 [2024-07-25 07:25:25.151742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.891 [2024-07-25 07:25:25.262399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.891 [2024-07-25 07:25:25.262463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.891 [2024-07-25 07:25:25.262489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.891 [2024-07-25 07:25:25.262503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.891 [2024-07-25 07:25:25.262516] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.891 [2024-07-25 07:25:25.262553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.vPeRAgcRz1 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vPeRAgcRz1 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.vPeRAgcRz1 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vPeRAgcRz1 00:19:52.891 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.456 [2024-07-25 07:25:25.688184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.456 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.713 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.970 [2024-07-25 07:25:26.273804] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.970 [2024-07-25 07:25:26.274068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.970 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.228 malloc0 00:19:54.228 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.486 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:19:54.486 [2024-07-25 07:25:27.011006] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:54.486 [2024-07-25 07:25:27.011049] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:54.486 [2024-07-25 07:25:27.011096] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:54.744 request: 00:19:54.744 { 
00:19:54.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.744 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.744 "psk": "/tmp/tmp.vPeRAgcRz1", 00:19:54.744 "method": "nvmf_subsystem_add_host", 00:19:54.744 "req_id": 1 00:19:54.744 } 00:19:54.744 Got JSON-RPC error response 00:19:54.744 response: 00:19:54.744 { 00:19:54.744 "code": -32603, 00:19:54.744 "message": "Internal error" 00:19:54.744 } 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2494931 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2494931 ']' 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2494931 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2494931 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2494931' 00:19:54.744 killing process with pid 2494931 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2494931 00:19:54.744 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2494931 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.vPeRAgcRz1 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2495224 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2495224 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2495224 ']' 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.002 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.002 [2024-07-25 07:25:27.420052] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:55.002 [2024-07-25 07:25:27.420139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.002 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.002 [2024-07-25 07:25:27.489104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.260 [2024-07-25 07:25:27.602443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.260 [2024-07-25 07:25:27.602508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.260 [2024-07-25 07:25:27.602532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.260 [2024-07-25 07:25:27.602545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.260 [2024-07-25 07:25:27.602557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.260 [2024-07-25 07:25:27.602588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.858 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.858 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:55.858 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.858 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.858 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.115 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.115 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.vPeRAgcRz1 00:19:56.115 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vPeRAgcRz1 00:19:56.115 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.372 [2024-07-25 07:25:28.670118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.372 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:56.630 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:56.888 [2024-07-25 07:25:29.251664] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.888 [2024-07-25 07:25:29.251934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:56.888 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.145 malloc0 00:19:57.145 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.401 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:19:57.658 [2024-07-25 07:25:29.993400] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2495526 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2495526 /var/tmp/bdevperf.sock 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2495526 ']' 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:57.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.658 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 [2024-07-25 07:25:30.057735] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:57.658 [2024-07-25 07:25:30.057838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495526 ] 00:19:57.658 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.658 [2024-07-25 07:25:30.116938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.915 [2024-07-25 07:25:30.226394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.915 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.915 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:57.915 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:19:58.172 [2024-07-25 07:25:30.580381] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.172 [2024-07-25 07:25:30.580489] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:58.172 TLSTESTn1 00:19:58.172 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:58.738 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:58.738 "subsystems": [ 00:19:58.738 { 00:19:58.738 "subsystem": "keyring", 00:19:58.738 "config": [] 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "subsystem": "iobuf", 00:19:58.738 "config": [ 00:19:58.738 { 00:19:58.738 "method": "iobuf_set_options", 00:19:58.738 "params": { 00:19:58.738 "small_pool_count": 8192, 00:19:58.738 "large_pool_count": 1024, 00:19:58.738 "small_bufsize": 8192, 00:19:58.738 "large_bufsize": 135168 00:19:58.738 } 00:19:58.738 } 00:19:58.738 ] 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "subsystem": "sock", 00:19:58.738 "config": [ 00:19:58.738 { 00:19:58.738 "method": "sock_set_default_impl", 00:19:58.738 "params": { 00:19:58.738 "impl_name": "posix" 00:19:58.738 } 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "method": "sock_impl_set_options", 00:19:58.738 "params": { 00:19:58.738 "impl_name": "ssl", 00:19:58.738 "recv_buf_size": 4096, 00:19:58.738 "send_buf_size": 4096, 00:19:58.738 "enable_recv_pipe": true, 00:19:58.738 "enable_quickack": false, 00:19:58.738 "enable_placement_id": 0, 00:19:58.738 "enable_zerocopy_send_server": true, 00:19:58.738 "enable_zerocopy_send_client": false, 00:19:58.738 "zerocopy_threshold": 0, 00:19:58.738 "tls_version": 0, 00:19:58.738 "enable_ktls": false 00:19:58.738 } 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "method": "sock_impl_set_options", 00:19:58.738 "params": { 00:19:58.738 "impl_name": "posix", 00:19:58.738 "recv_buf_size": 2097152, 00:19:58.738 "send_buf_size": 2097152, 00:19:58.738 "enable_recv_pipe": true, 00:19:58.738 "enable_quickack": false, 00:19:58.738 "enable_placement_id": 0, 00:19:58.738 "enable_zerocopy_send_server": true, 00:19:58.738 "enable_zerocopy_send_client": false, 00:19:58.738 "zerocopy_threshold": 0, 00:19:58.738 "tls_version": 0, 00:19:58.738 "enable_ktls": false 00:19:58.738 } 
00:19:58.738 } 00:19:58.738 ] 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "subsystem": "vmd", 00:19:58.738 "config": [] 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "subsystem": "accel", 00:19:58.738 "config": [ 00:19:58.738 { 00:19:58.738 "method": "accel_set_options", 00:19:58.738 "params": { 00:19:58.738 "small_cache_size": 128, 00:19:58.738 "large_cache_size": 16, 00:19:58.738 "task_count": 2048, 00:19:58.738 "sequence_count": 2048, 00:19:58.738 "buf_count": 2048 00:19:58.738 } 00:19:58.738 } 00:19:58.738 ] 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "subsystem": "bdev", 00:19:58.738 "config": [ 00:19:58.738 { 00:19:58.738 "method": "bdev_set_options", 00:19:58.738 "params": { 00:19:58.738 "bdev_io_pool_size": 65535, 00:19:58.738 "bdev_io_cache_size": 256, 00:19:58.738 "bdev_auto_examine": true, 00:19:58.738 "iobuf_small_cache_size": 128, 00:19:58.738 "iobuf_large_cache_size": 16 00:19:58.738 } 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "method": "bdev_raid_set_options", 00:19:58.738 "params": { 00:19:58.738 "process_window_size_kb": 1024, 00:19:58.738 "process_max_bandwidth_mb_sec": 0 00:19:58.738 } 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "method": "bdev_iscsi_set_options", 00:19:58.738 "params": { 00:19:58.738 "timeout_sec": 30 00:19:58.738 } 00:19:58.738 }, 00:19:58.738 { 00:19:58.738 "method": "bdev_nvme_set_options", 00:19:58.738 "params": { 00:19:58.738 "action_on_timeout": "none", 00:19:58.738 "timeout_us": 0, 00:19:58.738 "timeout_admin_us": 0, 00:19:58.738 "keep_alive_timeout_ms": 10000, 00:19:58.738 "arbitration_burst": 0, 00:19:58.738 "low_priority_weight": 0, 00:19:58.738 "medium_priority_weight": 0, 00:19:58.738 "high_priority_weight": 0, 00:19:58.738 "nvme_adminq_poll_period_us": 10000, 00:19:58.738 "nvme_ioq_poll_period_us": 0, 00:19:58.738 "io_queue_requests": 0, 00:19:58.738 "delay_cmd_submit": true, 00:19:58.738 "transport_retry_count": 4, 00:19:58.738 "bdev_retry_count": 3, 00:19:58.738 "transport_ack_timeout": 0, 00:19:58.738 
"ctrlr_loss_timeout_sec": 0, 00:19:58.738 "reconnect_delay_sec": 0, 00:19:58.738 "fast_io_fail_timeout_sec": 0, 00:19:58.738 "disable_auto_failback": false, 00:19:58.738 "generate_uuids": false, 00:19:58.738 "transport_tos": 0, 00:19:58.739 "nvme_error_stat": false, 00:19:58.739 "rdma_srq_size": 0, 00:19:58.739 "io_path_stat": false, 00:19:58.739 "allow_accel_sequence": false, 00:19:58.739 "rdma_max_cq_size": 0, 00:19:58.739 "rdma_cm_event_timeout_ms": 0, 00:19:58.739 "dhchap_digests": [ 00:19:58.739 "sha256", 00:19:58.739 "sha384", 00:19:58.739 "sha512" 00:19:58.739 ], 00:19:58.739 "dhchap_dhgroups": [ 00:19:58.739 "null", 00:19:58.739 "ffdhe2048", 00:19:58.739 "ffdhe3072", 00:19:58.739 "ffdhe4096", 00:19:58.739 "ffdhe6144", 00:19:58.739 "ffdhe8192" 00:19:58.739 ] 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "bdev_nvme_set_hotplug", 00:19:58.739 "params": { 00:19:58.739 "period_us": 100000, 00:19:58.739 "enable": false 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "bdev_malloc_create", 00:19:58.739 "params": { 00:19:58.739 "name": "malloc0", 00:19:58.739 "num_blocks": 8192, 00:19:58.739 "block_size": 4096, 00:19:58.739 "physical_block_size": 4096, 00:19:58.739 "uuid": "1f2494ba-78c8-4789-8220-f4401025182f", 00:19:58.739 "optimal_io_boundary": 0, 00:19:58.739 "md_size": 0, 00:19:58.739 "dif_type": 0, 00:19:58.739 "dif_is_head_of_md": false, 00:19:58.739 "dif_pi_format": 0 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "bdev_wait_for_examine" 00:19:58.739 } 00:19:58.739 ] 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "subsystem": "nbd", 00:19:58.739 "config": [] 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "subsystem": "scheduler", 00:19:58.739 "config": [ 00:19:58.739 { 00:19:58.739 "method": "framework_set_scheduler", 00:19:58.739 "params": { 00:19:58.739 "name": "static" 00:19:58.739 } 00:19:58.739 } 00:19:58.739 ] 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "subsystem": "nvmf", 00:19:58.739 
"config": [ 00:19:58.739 { 00:19:58.739 "method": "nvmf_set_config", 00:19:58.739 "params": { 00:19:58.739 "discovery_filter": "match_any", 00:19:58.739 "admin_cmd_passthru": { 00:19:58.739 "identify_ctrlr": false 00:19:58.739 } 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_set_max_subsystems", 00:19:58.739 "params": { 00:19:58.739 "max_subsystems": 1024 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_set_crdt", 00:19:58.739 "params": { 00:19:58.739 "crdt1": 0, 00:19:58.739 "crdt2": 0, 00:19:58.739 "crdt3": 0 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_create_transport", 00:19:58.739 "params": { 00:19:58.739 "trtype": "TCP", 00:19:58.739 "max_queue_depth": 128, 00:19:58.739 "max_io_qpairs_per_ctrlr": 127, 00:19:58.739 "in_capsule_data_size": 4096, 00:19:58.739 "max_io_size": 131072, 00:19:58.739 "io_unit_size": 131072, 00:19:58.739 "max_aq_depth": 128, 00:19:58.739 "num_shared_buffers": 511, 00:19:58.739 "buf_cache_size": 4294967295, 00:19:58.739 "dif_insert_or_strip": false, 00:19:58.739 "zcopy": false, 00:19:58.739 "c2h_success": false, 00:19:58.739 "sock_priority": 0, 00:19:58.739 "abort_timeout_sec": 1, 00:19:58.739 "ack_timeout": 0, 00:19:58.739 "data_wr_pool_size": 0 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_create_subsystem", 00:19:58.739 "params": { 00:19:58.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.739 "allow_any_host": false, 00:19:58.739 "serial_number": "SPDK00000000000001", 00:19:58.739 "model_number": "SPDK bdev Controller", 00:19:58.739 "max_namespaces": 10, 00:19:58.739 "min_cntlid": 1, 00:19:58.739 "max_cntlid": 65519, 00:19:58.739 "ana_reporting": false 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_subsystem_add_host", 00:19:58.739 "params": { 00:19:58.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.739 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.739 "psk": "/tmp/tmp.vPeRAgcRz1" 
00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_subsystem_add_ns", 00:19:58.739 "params": { 00:19:58.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.739 "namespace": { 00:19:58.739 "nsid": 1, 00:19:58.739 "bdev_name": "malloc0", 00:19:58.739 "nguid": "1F2494BA78C847898220F4401025182F", 00:19:58.739 "uuid": "1f2494ba-78c8-4789-8220-f4401025182f", 00:19:58.739 "no_auto_visible": false 00:19:58.739 } 00:19:58.739 } 00:19:58.739 }, 00:19:58.739 { 00:19:58.739 "method": "nvmf_subsystem_add_listener", 00:19:58.739 "params": { 00:19:58.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.739 "listen_address": { 00:19:58.739 "trtype": "TCP", 00:19:58.739 "adrfam": "IPv4", 00:19:58.739 "traddr": "10.0.0.2", 00:19:58.739 "trsvcid": "4420" 00:19:58.739 }, 00:19:58.739 "secure_channel": true 00:19:58.739 } 00:19:58.739 } 00:19:58.739 ] 00:19:58.739 } 00:19:58.739 ] 00:19:58.739 }' 00:19:58.739 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:58.997 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:58.997 "subsystems": [ 00:19:58.997 { 00:19:58.997 "subsystem": "keyring", 00:19:58.997 "config": [] 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "subsystem": "iobuf", 00:19:58.997 "config": [ 00:19:58.997 { 00:19:58.997 "method": "iobuf_set_options", 00:19:58.997 "params": { 00:19:58.997 "small_pool_count": 8192, 00:19:58.997 "large_pool_count": 1024, 00:19:58.997 "small_bufsize": 8192, 00:19:58.997 "large_bufsize": 135168 00:19:58.997 } 00:19:58.997 } 00:19:58.997 ] 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "subsystem": "sock", 00:19:58.997 "config": [ 00:19:58.997 { 00:19:58.997 "method": "sock_set_default_impl", 00:19:58.997 "params": { 00:19:58.997 "impl_name": "posix" 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "sock_impl_set_options", 00:19:58.997 
"params": { 00:19:58.997 "impl_name": "ssl", 00:19:58.997 "recv_buf_size": 4096, 00:19:58.997 "send_buf_size": 4096, 00:19:58.997 "enable_recv_pipe": true, 00:19:58.997 "enable_quickack": false, 00:19:58.997 "enable_placement_id": 0, 00:19:58.997 "enable_zerocopy_send_server": true, 00:19:58.997 "enable_zerocopy_send_client": false, 00:19:58.997 "zerocopy_threshold": 0, 00:19:58.997 "tls_version": 0, 00:19:58.997 "enable_ktls": false 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "sock_impl_set_options", 00:19:58.997 "params": { 00:19:58.997 "impl_name": "posix", 00:19:58.997 "recv_buf_size": 2097152, 00:19:58.997 "send_buf_size": 2097152, 00:19:58.997 "enable_recv_pipe": true, 00:19:58.997 "enable_quickack": false, 00:19:58.997 "enable_placement_id": 0, 00:19:58.997 "enable_zerocopy_send_server": true, 00:19:58.997 "enable_zerocopy_send_client": false, 00:19:58.997 "zerocopy_threshold": 0, 00:19:58.997 "tls_version": 0, 00:19:58.997 "enable_ktls": false 00:19:58.997 } 00:19:58.997 } 00:19:58.997 ] 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "subsystem": "vmd", 00:19:58.997 "config": [] 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "subsystem": "accel", 00:19:58.997 "config": [ 00:19:58.997 { 00:19:58.997 "method": "accel_set_options", 00:19:58.997 "params": { 00:19:58.997 "small_cache_size": 128, 00:19:58.997 "large_cache_size": 16, 00:19:58.997 "task_count": 2048, 00:19:58.997 "sequence_count": 2048, 00:19:58.997 "buf_count": 2048 00:19:58.997 } 00:19:58.997 } 00:19:58.997 ] 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "subsystem": "bdev", 00:19:58.997 "config": [ 00:19:58.997 { 00:19:58.997 "method": "bdev_set_options", 00:19:58.997 "params": { 00:19:58.997 "bdev_io_pool_size": 65535, 00:19:58.997 "bdev_io_cache_size": 256, 00:19:58.997 "bdev_auto_examine": true, 00:19:58.997 "iobuf_small_cache_size": 128, 00:19:58.997 "iobuf_large_cache_size": 16 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "bdev_raid_set_options", 
00:19:58.997 "params": { 00:19:58.997 "process_window_size_kb": 1024, 00:19:58.997 "process_max_bandwidth_mb_sec": 0 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "bdev_iscsi_set_options", 00:19:58.997 "params": { 00:19:58.997 "timeout_sec": 30 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "bdev_nvme_set_options", 00:19:58.997 "params": { 00:19:58.997 "action_on_timeout": "none", 00:19:58.997 "timeout_us": 0, 00:19:58.997 "timeout_admin_us": 0, 00:19:58.997 "keep_alive_timeout_ms": 10000, 00:19:58.997 "arbitration_burst": 0, 00:19:58.997 "low_priority_weight": 0, 00:19:58.997 "medium_priority_weight": 0, 00:19:58.997 "high_priority_weight": 0, 00:19:58.997 "nvme_adminq_poll_period_us": 10000, 00:19:58.997 "nvme_ioq_poll_period_us": 0, 00:19:58.997 "io_queue_requests": 512, 00:19:58.997 "delay_cmd_submit": true, 00:19:58.997 "transport_retry_count": 4, 00:19:58.997 "bdev_retry_count": 3, 00:19:58.997 "transport_ack_timeout": 0, 00:19:58.997 "ctrlr_loss_timeout_sec": 0, 00:19:58.997 "reconnect_delay_sec": 0, 00:19:58.997 "fast_io_fail_timeout_sec": 0, 00:19:58.997 "disable_auto_failback": false, 00:19:58.997 "generate_uuids": false, 00:19:58.997 "transport_tos": 0, 00:19:58.997 "nvme_error_stat": false, 00:19:58.997 "rdma_srq_size": 0, 00:19:58.997 "io_path_stat": false, 00:19:58.997 "allow_accel_sequence": false, 00:19:58.997 "rdma_max_cq_size": 0, 00:19:58.997 "rdma_cm_event_timeout_ms": 0, 00:19:58.997 "dhchap_digests": [ 00:19:58.997 "sha256", 00:19:58.997 "sha384", 00:19:58.997 "sha512" 00:19:58.997 ], 00:19:58.997 "dhchap_dhgroups": [ 00:19:58.997 "null", 00:19:58.997 "ffdhe2048", 00:19:58.997 "ffdhe3072", 00:19:58.997 "ffdhe4096", 00:19:58.997 "ffdhe6144", 00:19:58.997 "ffdhe8192" 00:19:58.997 ] 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "bdev_nvme_attach_controller", 00:19:58.997 "params": { 00:19:58.997 "name": "TLSTEST", 00:19:58.997 "trtype": "TCP", 00:19:58.997 "adrfam": "IPv4", 
00:19:58.997 "traddr": "10.0.0.2", 00:19:58.997 "trsvcid": "4420", 00:19:58.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.997 "prchk_reftag": false, 00:19:58.997 "prchk_guard": false, 00:19:58.997 "ctrlr_loss_timeout_sec": 0, 00:19:58.997 "reconnect_delay_sec": 0, 00:19:58.997 "fast_io_fail_timeout_sec": 0, 00:19:58.997 "psk": "/tmp/tmp.vPeRAgcRz1", 00:19:58.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.997 "hdgst": false, 00:19:58.997 "ddgst": false 00:19:58.997 } 00:19:58.997 }, 00:19:58.997 { 00:19:58.997 "method": "bdev_nvme_set_hotplug", 00:19:58.997 "params": { 00:19:58.998 "period_us": 100000, 00:19:58.998 "enable": false 00:19:58.998 } 00:19:58.998 }, 00:19:58.998 { 00:19:58.998 "method": "bdev_wait_for_examine" 00:19:58.998 } 00:19:58.998 ] 00:19:58.998 }, 00:19:58.998 { 00:19:58.998 "subsystem": "nbd", 00:19:58.998 "config": [] 00:19:58.998 } 00:19:58.998 ] 00:19:58.998 }' 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2495526 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2495526 ']' 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2495526 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495526 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495526' 00:19:58.998 killing process with 
pid 2495526 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2495526 00:19:58.998 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.998 00:19:58.998 Latency(us) 00:19:58.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.998 =================================================================================================================== 00:19:58.998 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.998 [2024-07-25 07:25:31.335826] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:58.998 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2495526 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2495224 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2495224 ']' 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2495224 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495224 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495224' 00:19:59.255 killing process with pid 2495224 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2495224 00:19:59.255 [2024-07-25 07:25:31.631207] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:59.255 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2495224 00:19:59.514 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:59.514 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.514 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:59.514 "subsystems": [ 00:19:59.514 { 00:19:59.514 "subsystem": "keyring", 00:19:59.514 "config": [] 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "subsystem": "iobuf", 00:19:59.514 "config": [ 00:19:59.514 { 00:19:59.514 "method": "iobuf_set_options", 00:19:59.514 "params": { 00:19:59.514 "small_pool_count": 8192, 00:19:59.514 "large_pool_count": 1024, 00:19:59.514 "small_bufsize": 8192, 00:19:59.514 "large_bufsize": 135168 00:19:59.514 } 00:19:59.514 } 00:19:59.514 ] 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "subsystem": "sock", 00:19:59.514 "config": [ 00:19:59.514 { 00:19:59.514 "method": "sock_set_default_impl", 00:19:59.514 "params": { 00:19:59.514 "impl_name": "posix" 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "sock_impl_set_options", 00:19:59.514 "params": { 00:19:59.514 "impl_name": "ssl", 00:19:59.514 "recv_buf_size": 4096, 00:19:59.514 "send_buf_size": 4096, 00:19:59.514 "enable_recv_pipe": true, 00:19:59.514 "enable_quickack": false, 00:19:59.514 "enable_placement_id": 0, 00:19:59.514 "enable_zerocopy_send_server": true, 00:19:59.514 "enable_zerocopy_send_client": false, 00:19:59.514 "zerocopy_threshold": 0, 00:19:59.514 "tls_version": 0, 00:19:59.514 "enable_ktls": false 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "sock_impl_set_options", 00:19:59.514 
"params": { 00:19:59.514 "impl_name": "posix", 00:19:59.514 "recv_buf_size": 2097152, 00:19:59.514 "send_buf_size": 2097152, 00:19:59.514 "enable_recv_pipe": true, 00:19:59.514 "enable_quickack": false, 00:19:59.514 "enable_placement_id": 0, 00:19:59.514 "enable_zerocopy_send_server": true, 00:19:59.514 "enable_zerocopy_send_client": false, 00:19:59.514 "zerocopy_threshold": 0, 00:19:59.514 "tls_version": 0, 00:19:59.514 "enable_ktls": false 00:19:59.514 } 00:19:59.514 } 00:19:59.514 ] 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "subsystem": "vmd", 00:19:59.514 "config": [] 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "subsystem": "accel", 00:19:59.514 "config": [ 00:19:59.514 { 00:19:59.514 "method": "accel_set_options", 00:19:59.514 "params": { 00:19:59.514 "small_cache_size": 128, 00:19:59.514 "large_cache_size": 16, 00:19:59.514 "task_count": 2048, 00:19:59.514 "sequence_count": 2048, 00:19:59.514 "buf_count": 2048 00:19:59.514 } 00:19:59.514 } 00:19:59.514 ] 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "subsystem": "bdev", 00:19:59.514 "config": [ 00:19:59.514 { 00:19:59.514 "method": "bdev_set_options", 00:19:59.514 "params": { 00:19:59.514 "bdev_io_pool_size": 65535, 00:19:59.514 "bdev_io_cache_size": 256, 00:19:59.514 "bdev_auto_examine": true, 00:19:59.514 "iobuf_small_cache_size": 128, 00:19:59.514 "iobuf_large_cache_size": 16 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "bdev_raid_set_options", 00:19:59.514 "params": { 00:19:59.514 "process_window_size_kb": 1024, 00:19:59.514 "process_max_bandwidth_mb_sec": 0 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "bdev_iscsi_set_options", 00:19:59.514 "params": { 00:19:59.514 "timeout_sec": 30 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "bdev_nvme_set_options", 00:19:59.514 "params": { 00:19:59.514 "action_on_timeout": "none", 00:19:59.514 "timeout_us": 0, 00:19:59.514 "timeout_admin_us": 0, 00:19:59.514 "keep_alive_timeout_ms": 10000, 
00:19:59.514 "arbitration_burst": 0, 00:19:59.514 "low_priority_weight": 0, 00:19:59.514 "medium_priority_weight": 0, 00:19:59.514 "high_priority_weight": 0, 00:19:59.514 "nvme_adminq_poll_period_us": 10000, 00:19:59.514 "nvme_ioq_poll_period_us": 0, 00:19:59.514 "io_queue_requests": 0, 00:19:59.514 "delay_cmd_submit": true, 00:19:59.514 "transport_retry_count": 4, 00:19:59.514 "bdev_retry_count": 3, 00:19:59.514 "transport_ack_timeout": 0, 00:19:59.514 "ctrlr_loss_timeout_sec": 0, 00:19:59.514 "reconnect_delay_sec": 0, 00:19:59.514 "fast_io_fail_timeout_sec": 0, 00:19:59.514 "disable_auto_failback": false, 00:19:59.514 "generate_uuids": false, 00:19:59.514 "transport_tos": 0, 00:19:59.514 "nvme_error_stat": false, 00:19:59.514 "rdma_srq_size": 0, 00:19:59.514 "io_path_stat": false, 00:19:59.514 "allow_accel_sequence": false, 00:19:59.514 "rdma_max_cq_size": 0, 00:19:59.514 "rdma_cm_event_timeout_ms": 0, 00:19:59.514 "dhchap_digests": [ 00:19:59.514 "sha256", 00:19:59.514 "sha384", 00:19:59.514 "sha512" 00:19:59.514 ], 00:19:59.514 "dhchap_dhgroups": [ 00:19:59.514 "null", 00:19:59.514 "ffdhe2048", 00:19:59.514 "ffdhe3072", 00:19:59.514 "ffdhe4096", 00:19:59.514 "ffdhe6144", 00:19:59.514 "ffdhe8192" 00:19:59.514 ] 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "bdev_nvme_set_hotplug", 00:19:59.514 "params": { 00:19:59.514 "period_us": 100000, 00:19:59.514 "enable": false 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "bdev_malloc_create", 00:19:59.514 "params": { 00:19:59.514 "name": "malloc0", 00:19:59.514 "num_blocks": 8192, 00:19:59.514 "block_size": 4096, 00:19:59.514 "physical_block_size": 4096, 00:19:59.514 "uuid": "1f2494ba-78c8-4789-8220-f4401025182f", 00:19:59.514 "optimal_io_boundary": 0, 00:19:59.514 "md_size": 0, 00:19:59.514 "dif_type": 0, 00:19:59.514 "dif_is_head_of_md": false, 00:19:59.514 "dif_pi_format": 0 00:19:59.514 } 00:19:59.514 }, 00:19:59.514 { 00:19:59.514 "method": "bdev_wait_for_examine" 
00:19:59.514 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "nbd", 00:19:59.515 "config": [] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "scheduler", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "framework_set_scheduler", 00:19:59.515 "params": { 00:19:59.515 "name": "static" 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "nvmf", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "nvmf_set_config", 00:19:59.515 "params": { 00:19:59.515 "discovery_filter": "match_any", 00:19:59.515 "admin_cmd_passthru": { 00:19:59.515 "identify_ctrlr": false 00:19:59.515 } 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_set_max_subsystems", 00:19:59.515 "params": { 00:19:59.515 "max_subsystems": 1024 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_set_crdt", 00:19:59.515 "params": { 00:19:59.515 "crdt1": 0, 00:19:59.515 "crdt2": 0, 00:19:59.515 "crdt3": 0 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_create_transport", 00:19:59.515 "params": { 00:19:59.515 "trtype": "TCP", 00:19:59.515 "max_queue_depth": 128, 00:19:59.515 "max_io_qpairs_per_ctrlr": 127, 00:19:59.515 "in_capsule_data_size": 4096, 00:19:59.515 "max_io_size": 131072, 00:19:59.515 "io_unit_size": 131072, 00:19:59.515 "max_aq_depth": 128, 00:19:59.515 "num_shared_buffers": 511, 00:19:59.515 "buf_cache_size": 4294967295, 00:19:59.515 "dif_insert_or_strip": false, 00:19:59.515 "zcopy": false, 00:19:59.515 "c2h_success": false, 00:19:59.515 "sock_priority": 0, 00:19:59.515 "abort_timeout_sec": 1, 00:19:59.515 "ack_timeout": 0, 00:19:59.515 "data_wr_pool_size": 0 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_create_subsystem", 00:19:59.515 "params": { 00:19:59.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.515 "allow_any_host": false, 00:19:59.515 "serial_number": "SPDK00000000000001", 
00:19:59.515 "model_number": "SPDK bdev Controller", 00:19:59.515 "max_namespaces": 10, 00:19:59.515 "min_cntlid": 1, 00:19:59.515 "max_cntlid": 65519, 00:19:59.515 "ana_reporting": false 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_subsystem_add_host", 00:19:59.515 "params": { 00:19:59.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.515 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.515 "psk": "/tmp/tmp.vPeRAgcRz1" 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_subsystem_add_ns", 00:19:59.515 "params": { 00:19:59.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.515 "namespace": { 00:19:59.515 "nsid": 1, 00:19:59.515 "bdev_name": "malloc0", 00:19:59.515 "nguid": "1F2494BA78C847898220F4401025182F", 00:19:59.515 "uuid": "1f2494ba-78c8-4789-8220-f4401025182f", 00:19:59.515 "no_auto_visible": false 00:19:59.515 } 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "nvmf_subsystem_add_listener", 00:19:59.515 "params": { 00:19:59.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.515 "listen_address": { 00:19:59.515 "trtype": "TCP", 00:19:59.515 "adrfam": "IPv4", 00:19:59.515 "traddr": "10.0.0.2", 00:19:59.515 "trsvcid": "4420" 00:19:59.515 }, 00:19:59.515 "secure_channel": true 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }' 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2495804 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2495804 
00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2495804 ']' 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.515 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.515 [2024-07-25 07:25:31.988058] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:19:59.515 [2024-07-25 07:25:31.988147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.515 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.773 [2024-07-25 07:25:32.055276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.773 [2024-07-25 07:25:32.166920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.773 [2024-07-25 07:25:32.166985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.773 [2024-07-25 07:25:32.167001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.773 [2024-07-25 07:25:32.167015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:59.773 [2024-07-25 07:25:32.167026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.773 [2024-07-25 07:25:32.167112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.031 [2024-07-25 07:25:32.407269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.031 [2024-07-25 07:25:32.434979] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:00.031 [2024-07-25 07:25:32.451050] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.031 [2024-07-25 07:25:32.451339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2495955 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2495955 /var/tmp/bdevperf.sock 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2495955 ']' 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.597 07:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.597 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:00.597 "subsystems": [ 00:20:00.597 { 00:20:00.597 "subsystem": "keyring", 00:20:00.597 "config": [] 00:20:00.597 }, 00:20:00.597 { 00:20:00.597 "subsystem": "iobuf", 00:20:00.597 "config": [ 00:20:00.597 { 00:20:00.597 "method": "iobuf_set_options", 00:20:00.598 "params": { 00:20:00.598 "small_pool_count": 8192, 00:20:00.598 "large_pool_count": 1024, 00:20:00.598 "small_bufsize": 8192, 00:20:00.598 "large_bufsize": 135168 00:20:00.598 } 00:20:00.598 } 00:20:00.598 ] 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "subsystem": "sock", 00:20:00.598 "config": [ 00:20:00.598 { 00:20:00.598 "method": "sock_set_default_impl", 00:20:00.598 "params": { 00:20:00.598 "impl_name": "posix" 00:20:00.598 } 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "method": "sock_impl_set_options", 00:20:00.598 "params": { 00:20:00.598 "impl_name": "ssl", 00:20:00.598 "recv_buf_size": 4096, 00:20:00.598 "send_buf_size": 4096, 00:20:00.598 "enable_recv_pipe": true, 00:20:00.598 "enable_quickack": false, 00:20:00.598 "enable_placement_id": 0, 00:20:00.598 "enable_zerocopy_send_server": true, 00:20:00.598 "enable_zerocopy_send_client": false, 00:20:00.598 "zerocopy_threshold": 0, 00:20:00.598 "tls_version": 0, 00:20:00.598 "enable_ktls": false 00:20:00.598 } 00:20:00.598 }, 
00:20:00.598 { 00:20:00.598 "method": "sock_impl_set_options", 00:20:00.598 "params": { 00:20:00.598 "impl_name": "posix", 00:20:00.598 "recv_buf_size": 2097152, 00:20:00.598 "send_buf_size": 2097152, 00:20:00.598 "enable_recv_pipe": true, 00:20:00.598 "enable_quickack": false, 00:20:00.598 "enable_placement_id": 0, 00:20:00.598 "enable_zerocopy_send_server": true, 00:20:00.598 "enable_zerocopy_send_client": false, 00:20:00.598 "zerocopy_threshold": 0, 00:20:00.598 "tls_version": 0, 00:20:00.598 "enable_ktls": false 00:20:00.598 } 00:20:00.598 } 00:20:00.598 ] 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "subsystem": "vmd", 00:20:00.598 "config": [] 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "subsystem": "accel", 00:20:00.598 "config": [ 00:20:00.598 { 00:20:00.598 "method": "accel_set_options", 00:20:00.598 "params": { 00:20:00.598 "small_cache_size": 128, 00:20:00.598 "large_cache_size": 16, 00:20:00.598 "task_count": 2048, 00:20:00.598 "sequence_count": 2048, 00:20:00.598 "buf_count": 2048 00:20:00.598 } 00:20:00.598 } 00:20:00.598 ] 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "subsystem": "bdev", 00:20:00.598 "config": [ 00:20:00.598 { 00:20:00.598 "method": "bdev_set_options", 00:20:00.598 "params": { 00:20:00.598 "bdev_io_pool_size": 65535, 00:20:00.598 "bdev_io_cache_size": 256, 00:20:00.598 "bdev_auto_examine": true, 00:20:00.598 "iobuf_small_cache_size": 128, 00:20:00.598 "iobuf_large_cache_size": 16 00:20:00.598 } 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "method": "bdev_raid_set_options", 00:20:00.598 "params": { 00:20:00.598 "process_window_size_kb": 1024, 00:20:00.598 "process_max_bandwidth_mb_sec": 0 00:20:00.598 } 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "method": "bdev_iscsi_set_options", 00:20:00.598 "params": { 00:20:00.598 "timeout_sec": 30 00:20:00.598 } 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "method": "bdev_nvme_set_options", 00:20:00.598 "params": { 00:20:00.598 "action_on_timeout": "none", 00:20:00.598 "timeout_us": 0, 00:20:00.598 
"timeout_admin_us": 0, 00:20:00.598 "keep_alive_timeout_ms": 10000, 00:20:00.598 "arbitration_burst": 0, 00:20:00.598 "low_priority_weight": 0, 00:20:00.598 "medium_priority_weight": 0, 00:20:00.598 "high_priority_weight": 0, 00:20:00.598 "nvme_adminq_poll_period_us": 10000, 00:20:00.598 "nvme_ioq_poll_period_us": 0, 00:20:00.598 "io_queue_requests": 512, 00:20:00.598 "delay_cmd_submit": true, 00:20:00.598 "transport_retry_count": 4, 00:20:00.598 "bdev_retry_count": 3, 00:20:00.598 "transport_ack_timeout": 0, 00:20:00.598 "ctrlr_loss_timeout_sec": 0, 00:20:00.598 "reconnect_delay_sec": 0, 00:20:00.598 "fast_io_fail_timeout_sec": 0, 00:20:00.598 "disable_auto_failback": false, 00:20:00.598 "generate_uuids": false, 00:20:00.598 "transport_tos": 0, 00:20:00.598 "nvme_error_stat": false, 00:20:00.598 "rdma_srq_size": 0, 00:20:00.598 "io_path_stat": false, 00:20:00.598 "allow_accel_sequence": false, 00:20:00.598 "rdma_max_cq_size": 0, 00:20:00.598 "rdma_cm_event_timeout_ms": 0, 00:20:00.598 "dhchap_digests": [ 00:20:00.598 "sha256", 00:20:00.598 "sha384", 00:20:00.598 "sha512" 00:20:00.598 ], 00:20:00.598 "dhchap_dhgroups": [ 00:20:00.598 "null", 00:20:00.598 "ffdhe2048", 00:20:00.598 "ffdhe3072", 00:20:00.598 "ffdhe4096", 00:20:00.598 "ffdhe6144", 00:20:00.598 "ffdhe8192" 00:20:00.598 ] 00:20:00.598 } 00:20:00.598 }, 00:20:00.598 { 00:20:00.598 "method": "bdev_nvme_attach_controller", 00:20:00.598 "params": { 00:20:00.598 "name": "TLSTEST", 00:20:00.598 "trtype": "TCP", 00:20:00.599 "adrfam": "IPv4", 00:20:00.599 "traddr": "10.0.0.2", 00:20:00.599 "trsvcid": "4420", 00:20:00.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.599 "prchk_reftag": false, 00:20:00.599 "prchk_guard": false, 00:20:00.599 "ctrlr_loss_timeout_sec": 0, 00:20:00.599 "reconnect_delay_sec": 0, 00:20:00.599 "fast_io_fail_timeout_sec": 0, 00:20:00.599 "psk": "/tmp/tmp.vPeRAgcRz1", 00:20:00.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.599 "hdgst": false, 00:20:00.599 "ddgst": false 
00:20:00.599 } 00:20:00.599 }, 00:20:00.599 { 00:20:00.599 "method": "bdev_nvme_set_hotplug", 00:20:00.599 "params": { 00:20:00.599 "period_us": 100000, 00:20:00.599 "enable": false 00:20:00.599 } 00:20:00.599 }, 00:20:00.599 { 00:20:00.599 "method": "bdev_wait_for_examine" 00:20:00.599 } 00:20:00.599 ] 00:20:00.599 }, 00:20:00.599 { 00:20:00.599 "subsystem": "nbd", 00:20:00.599 "config": [] 00:20:00.599 } 00:20:00.599 ] 00:20:00.599 }' 00:20:00.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.599 [2024-07-25 07:25:32.987535] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:20:00.599 [2024-07-25 07:25:32.987634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495955 ] 00:20:00.599 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.599 [2024-07-25 07:25:33.047464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.857 [2024-07-25 07:25:33.157274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.857 [2024-07-25 07:25:33.327204] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.857 [2024-07-25 07:25:33.327369] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:01.788 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.788 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.788 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:01.788 Running I/O for 10 seconds... 00:20:11.749 00:20:11.749 Latency(us) 00:20:11.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.749 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:11.749 Verification LBA range: start 0x0 length 0x2000 00:20:11.749 TLSTESTn1 : 10.04 2960.51 11.56 0.00 0.00 43128.37 6068.15 62914.56 00:20:11.749 =================================================================================================================== 00:20:11.749 Total : 2960.51 11.56 0.00 0.00 43128.37 6068.15 62914.56 00:20:11.749 0 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2495955 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2495955 ']' 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2495955 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495955 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495955' 00:20:11.749 killing process with pid 2495955 00:20:11.749 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2495955 00:20:11.749 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.749 00:20:11.749 Latency(us) 00:20:11.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.749 =================================================================================================================== 00:20:11.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.749 [2024-07-25 07:25:44.208584] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:11.749 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2495955 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2495804 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2495804 ']' 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2495804 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495804 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495804' 00:20:12.006 killing process with pid 2495804 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2495804 00:20:12.006 
[2024-07-25 07:25:44.506117] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:12.006 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2495804 00:20:12.264 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:12.264 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.264 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.264 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2497301 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2497301 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2497301 ']' 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.522 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.522 [2024-07-25 07:25:44.842689] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:20:12.523 [2024-07-25 07:25:44.842787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.523 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.523 [2024-07-25 07:25:44.905460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.523 [2024-07-25 07:25:45.017681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.523 [2024-07-25 07:25:45.017745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.523 [2024-07-25 07:25:45.017772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.523 [2024-07-25 07:25:45.017786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.523 [2024-07-25 07:25:45.017798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:12.523 [2024-07-25 07:25:45.017834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.vPeRAgcRz1 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vPeRAgcRz1 00:20:13.455 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:13.712 [2024-07-25 07:25:46.088641] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.712 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:13.970 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.228 [2024-07-25 07:25:46.658256] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.228 [2024-07-25 07:25:46.658589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:14.228 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.486 malloc0 00:20:14.486 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vPeRAgcRz1 00:20:15.051 [2024-07-25 07:25:47.552465] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2497705 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2497705 /var/tmp/bdevperf.sock 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2497705 ']' 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:15.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.051 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.309 [2024-07-25 07:25:47.619437] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:20:15.309 [2024-07-25 07:25:47.619517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497705 ] 00:20:15.309 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.309 [2024-07-25 07:25:47.680823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.309 [2024-07-25 07:25:47.791401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.595 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.595 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:15.595 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vPeRAgcRz1 00:20:15.852 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:16.110 [2024-07-25 07:25:48.430234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.110 nvme0n1 00:20:16.110 07:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.367 Running I/O for 1 seconds... 00:20:17.300 00:20:17.300 Latency(us) 00:20:17.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.300 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:17.300 Verification LBA range: start 0x0 length 0x2000 00:20:17.300 nvme0n1 : 1.04 2896.20 11.31 0.00 0.00 43353.71 7961.41 71458.51 00:20:17.300 =================================================================================================================== 00:20:17.300 Total : 2896.20 11.31 0.00 0.00 43353.71 7961.41 71458.51 00:20:17.300 0 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2497705 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2497705 ']' 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2497705 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497705 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497705' 00:20:17.300 killing process with pid 2497705 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
2497705 00:20:17.300 Received shutdown signal, test time was about 1.000000 seconds 00:20:17.300 00:20:17.300 Latency(us) 00:20:17.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.300 =================================================================================================================== 00:20:17.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.300 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2497705 00:20:17.557 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2497301 00:20:17.557 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2497301 ']' 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2497301 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497301 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497301' 00:20:17.557 killing process with pid 2497301 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2497301 00:20:17.557 [2024-07-25 07:25:50.030733] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.557 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2497301 
00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2498001 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2498001 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2498001 ']' 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.814 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.072 [2024-07-25 07:25:50.383350] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:20:18.072 [2024-07-25 07:25:50.383442] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.072 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.072 [2024-07-25 07:25:50.450991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.072 [2024-07-25 07:25:50.562619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.072 [2024-07-25 07:25:50.562681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.072 [2024-07-25 07:25:50.562707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.072 [2024-07-25 07:25:50.562721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.072 [2024-07-25 07:25:50.562732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:18.072 [2024-07-25 07:25:50.562762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 [2024-07-25 07:25:51.341733] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.006 malloc0 00:20:19.006 [2024-07-25 07:25:51.373371] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.006 [2024-07-25 07:25:51.387449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2498152 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 2498152 /var/tmp/bdevperf.sock 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2498152 ']' 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.006 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 [2024-07-25 07:25:51.451554] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:20:19.006 [2024-07-25 07:25:51.451616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498152 ] 00:20:19.006 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.006 [2024-07-25 07:25:51.512356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.264 [2024-07-25 07:25:51.628432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.265 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.265 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:19.265 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vPeRAgcRz1 00:20:19.523 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:19.781 [2024-07-25 07:25:52.227490] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.781 nvme0n1 00:20:20.038 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:20.038 Running I/O for 1 seconds... 
00:20:20.970 00:20:20.970 Latency(us) 00:20:20.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.970 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.970 Verification LBA range: start 0x0 length 0x2000 00:20:20.970 nvme0n1 : 1.04 2671.08 10.43 0.00 0.00 47005.74 6407.96 75730.49 00:20:20.970 =================================================================================================================== 00:20:20.970 Total : 2671.08 10.43 0.00 0.00 47005.74 6407.96 75730.49 00:20:20.970 0 00:20:20.970 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:20.970 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.970 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.229 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.229 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:21.229 "subsystems": [ 00:20:21.229 { 00:20:21.229 "subsystem": "keyring", 00:20:21.229 "config": [ 00:20:21.229 { 00:20:21.229 "method": "keyring_file_add_key", 00:20:21.229 "params": { 00:20:21.229 "name": "key0", 00:20:21.229 "path": "/tmp/tmp.vPeRAgcRz1" 00:20:21.229 } 00:20:21.229 } 00:20:21.229 ] 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "subsystem": "iobuf", 00:20:21.229 "config": [ 00:20:21.229 { 00:20:21.229 "method": "iobuf_set_options", 00:20:21.229 "params": { 00:20:21.229 "small_pool_count": 8192, 00:20:21.229 "large_pool_count": 1024, 00:20:21.229 "small_bufsize": 8192, 00:20:21.229 "large_bufsize": 135168 00:20:21.229 } 00:20:21.229 } 00:20:21.229 ] 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "subsystem": "sock", 00:20:21.229 "config": [ 00:20:21.229 { 00:20:21.229 "method": "sock_set_default_impl", 00:20:21.229 "params": { 00:20:21.229 "impl_name": "posix" 00:20:21.229 } 
00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "method": "sock_impl_set_options", 00:20:21.229 "params": { 00:20:21.229 "impl_name": "ssl", 00:20:21.229 "recv_buf_size": 4096, 00:20:21.229 "send_buf_size": 4096, 00:20:21.229 "enable_recv_pipe": true, 00:20:21.229 "enable_quickack": false, 00:20:21.229 "enable_placement_id": 0, 00:20:21.229 "enable_zerocopy_send_server": true, 00:20:21.229 "enable_zerocopy_send_client": false, 00:20:21.229 "zerocopy_threshold": 0, 00:20:21.229 "tls_version": 0, 00:20:21.229 "enable_ktls": false 00:20:21.229 } 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "method": "sock_impl_set_options", 00:20:21.229 "params": { 00:20:21.229 "impl_name": "posix", 00:20:21.229 "recv_buf_size": 2097152, 00:20:21.229 "send_buf_size": 2097152, 00:20:21.229 "enable_recv_pipe": true, 00:20:21.229 "enable_quickack": false, 00:20:21.229 "enable_placement_id": 0, 00:20:21.229 "enable_zerocopy_send_server": true, 00:20:21.229 "enable_zerocopy_send_client": false, 00:20:21.229 "zerocopy_threshold": 0, 00:20:21.229 "tls_version": 0, 00:20:21.229 "enable_ktls": false 00:20:21.229 } 00:20:21.229 } 00:20:21.229 ] 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "subsystem": "vmd", 00:20:21.229 "config": [] 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "subsystem": "accel", 00:20:21.229 "config": [ 00:20:21.229 { 00:20:21.229 "method": "accel_set_options", 00:20:21.229 "params": { 00:20:21.229 "small_cache_size": 128, 00:20:21.229 "large_cache_size": 16, 00:20:21.229 "task_count": 2048, 00:20:21.229 "sequence_count": 2048, 00:20:21.229 "buf_count": 2048 00:20:21.229 } 00:20:21.229 } 00:20:21.229 ] 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "subsystem": "bdev", 00:20:21.229 "config": [ 00:20:21.229 { 00:20:21.229 "method": "bdev_set_options", 00:20:21.229 "params": { 00:20:21.229 "bdev_io_pool_size": 65535, 00:20:21.229 "bdev_io_cache_size": 256, 00:20:21.229 "bdev_auto_examine": true, 00:20:21.229 "iobuf_small_cache_size": 128, 00:20:21.229 "iobuf_large_cache_size": 16 
00:20:21.229 } 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "method": "bdev_raid_set_options", 00:20:21.229 "params": { 00:20:21.229 "process_window_size_kb": 1024, 00:20:21.229 "process_max_bandwidth_mb_sec": 0 00:20:21.229 } 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "method": "bdev_iscsi_set_options", 00:20:21.229 "params": { 00:20:21.229 "timeout_sec": 30 00:20:21.229 } 00:20:21.229 }, 00:20:21.229 { 00:20:21.229 "method": "bdev_nvme_set_options", 00:20:21.229 "params": { 00:20:21.229 "action_on_timeout": "none", 00:20:21.229 "timeout_us": 0, 00:20:21.229 "timeout_admin_us": 0, 00:20:21.229 "keep_alive_timeout_ms": 10000, 00:20:21.229 "arbitration_burst": 0, 00:20:21.229 "low_priority_weight": 0, 00:20:21.229 "medium_priority_weight": 0, 00:20:21.229 "high_priority_weight": 0, 00:20:21.229 "nvme_adminq_poll_period_us": 10000, 00:20:21.229 "nvme_ioq_poll_period_us": 0, 00:20:21.229 "io_queue_requests": 0, 00:20:21.229 "delay_cmd_submit": true, 00:20:21.229 "transport_retry_count": 4, 00:20:21.229 "bdev_retry_count": 3, 00:20:21.229 "transport_ack_timeout": 0, 00:20:21.229 "ctrlr_loss_timeout_sec": 0, 00:20:21.229 "reconnect_delay_sec": 0, 00:20:21.229 "fast_io_fail_timeout_sec": 0, 00:20:21.229 "disable_auto_failback": false, 00:20:21.229 "generate_uuids": false, 00:20:21.229 "transport_tos": 0, 00:20:21.229 "nvme_error_stat": false, 00:20:21.229 "rdma_srq_size": 0, 00:20:21.229 "io_path_stat": false, 00:20:21.230 "allow_accel_sequence": false, 00:20:21.230 "rdma_max_cq_size": 0, 00:20:21.230 "rdma_cm_event_timeout_ms": 0, 00:20:21.230 "dhchap_digests": [ 00:20:21.230 "sha256", 00:20:21.230 "sha384", 00:20:21.230 "sha512" 00:20:21.230 ], 00:20:21.230 "dhchap_dhgroups": [ 00:20:21.230 "null", 00:20:21.230 "ffdhe2048", 00:20:21.230 "ffdhe3072", 00:20:21.230 "ffdhe4096", 00:20:21.230 "ffdhe6144", 00:20:21.230 "ffdhe8192" 00:20:21.230 ] 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "bdev_nvme_set_hotplug", 00:20:21.230 "params": { 00:20:21.230 
"period_us": 100000, 00:20:21.230 "enable": false 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "bdev_malloc_create", 00:20:21.230 "params": { 00:20:21.230 "name": "malloc0", 00:20:21.230 "num_blocks": 8192, 00:20:21.230 "block_size": 4096, 00:20:21.230 "physical_block_size": 4096, 00:20:21.230 "uuid": "0ff3347f-0f6c-46a1-8f47-eb6cbddd9174", 00:20:21.230 "optimal_io_boundary": 0, 00:20:21.230 "md_size": 0, 00:20:21.230 "dif_type": 0, 00:20:21.230 "dif_is_head_of_md": false, 00:20:21.230 "dif_pi_format": 0 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "bdev_wait_for_examine" 00:20:21.230 } 00:20:21.230 ] 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "subsystem": "nbd", 00:20:21.230 "config": [] 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "subsystem": "scheduler", 00:20:21.230 "config": [ 00:20:21.230 { 00:20:21.230 "method": "framework_set_scheduler", 00:20:21.230 "params": { 00:20:21.230 "name": "static" 00:20:21.230 } 00:20:21.230 } 00:20:21.230 ] 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "subsystem": "nvmf", 00:20:21.230 "config": [ 00:20:21.230 { 00:20:21.230 "method": "nvmf_set_config", 00:20:21.230 "params": { 00:20:21.230 "discovery_filter": "match_any", 00:20:21.230 "admin_cmd_passthru": { 00:20:21.230 "identify_ctrlr": false 00:20:21.230 } 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_set_max_subsystems", 00:20:21.230 "params": { 00:20:21.230 "max_subsystems": 1024 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_set_crdt", 00:20:21.230 "params": { 00:20:21.230 "crdt1": 0, 00:20:21.230 "crdt2": 0, 00:20:21.230 "crdt3": 0 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_create_transport", 00:20:21.230 "params": { 00:20:21.230 "trtype": "TCP", 00:20:21.230 "max_queue_depth": 128, 00:20:21.230 "max_io_qpairs_per_ctrlr": 127, 00:20:21.230 "in_capsule_data_size": 4096, 00:20:21.230 "max_io_size": 131072, 00:20:21.230 "io_unit_size": 
131072, 00:20:21.230 "max_aq_depth": 128, 00:20:21.230 "num_shared_buffers": 511, 00:20:21.230 "buf_cache_size": 4294967295, 00:20:21.230 "dif_insert_or_strip": false, 00:20:21.230 "zcopy": false, 00:20:21.230 "c2h_success": false, 00:20:21.230 "sock_priority": 0, 00:20:21.230 "abort_timeout_sec": 1, 00:20:21.230 "ack_timeout": 0, 00:20:21.230 "data_wr_pool_size": 0 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_create_subsystem", 00:20:21.230 "params": { 00:20:21.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.230 "allow_any_host": false, 00:20:21.230 "serial_number": "00000000000000000000", 00:20:21.230 "model_number": "SPDK bdev Controller", 00:20:21.230 "max_namespaces": 32, 00:20:21.230 "min_cntlid": 1, 00:20:21.230 "max_cntlid": 65519, 00:20:21.230 "ana_reporting": false 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_subsystem_add_host", 00:20:21.230 "params": { 00:20:21.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.230 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.230 "psk": "key0" 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_subsystem_add_ns", 00:20:21.230 "params": { 00:20:21.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.230 "namespace": { 00:20:21.230 "nsid": 1, 00:20:21.230 "bdev_name": "malloc0", 00:20:21.230 "nguid": "0FF3347F0F6C46A18F47EB6CBDDD9174", 00:20:21.230 "uuid": "0ff3347f-0f6c-46a1-8f47-eb6cbddd9174", 00:20:21.230 "no_auto_visible": false 00:20:21.230 } 00:20:21.230 } 00:20:21.230 }, 00:20:21.230 { 00:20:21.230 "method": "nvmf_subsystem_add_listener", 00:20:21.230 "params": { 00:20:21.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.230 "listen_address": { 00:20:21.230 "trtype": "TCP", 00:20:21.230 "adrfam": "IPv4", 00:20:21.230 "traddr": "10.0.0.2", 00:20:21.230 "trsvcid": "4420" 00:20:21.230 }, 00:20:21.230 "secure_channel": false, 00:20:21.230 "sock_impl": "ssl" 00:20:21.230 } 00:20:21.230 } 00:20:21.230 ] 00:20:21.230 } 00:20:21.230 ] 
00:20:21.230 }' 00:20:21.230 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:21.488 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:21.488 "subsystems": [ 00:20:21.488 { 00:20:21.488 "subsystem": "keyring", 00:20:21.488 "config": [ 00:20:21.488 { 00:20:21.488 "method": "keyring_file_add_key", 00:20:21.488 "params": { 00:20:21.488 "name": "key0", 00:20:21.488 "path": "/tmp/tmp.vPeRAgcRz1" 00:20:21.488 } 00:20:21.488 } 00:20:21.488 ] 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "subsystem": "iobuf", 00:20:21.488 "config": [ 00:20:21.488 { 00:20:21.488 "method": "iobuf_set_options", 00:20:21.488 "params": { 00:20:21.488 "small_pool_count": 8192, 00:20:21.488 "large_pool_count": 1024, 00:20:21.488 "small_bufsize": 8192, 00:20:21.488 "large_bufsize": 135168 00:20:21.488 } 00:20:21.488 } 00:20:21.488 ] 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "subsystem": "sock", 00:20:21.488 "config": [ 00:20:21.488 { 00:20:21.488 "method": "sock_set_default_impl", 00:20:21.488 "params": { 00:20:21.488 "impl_name": "posix" 00:20:21.488 } 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "method": "sock_impl_set_options", 00:20:21.488 "params": { 00:20:21.488 "impl_name": "ssl", 00:20:21.488 "recv_buf_size": 4096, 00:20:21.488 "send_buf_size": 4096, 00:20:21.488 "enable_recv_pipe": true, 00:20:21.488 "enable_quickack": false, 00:20:21.488 "enable_placement_id": 0, 00:20:21.488 "enable_zerocopy_send_server": true, 00:20:21.488 "enable_zerocopy_send_client": false, 00:20:21.488 "zerocopy_threshold": 0, 00:20:21.488 "tls_version": 0, 00:20:21.488 "enable_ktls": false 00:20:21.488 } 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "method": "sock_impl_set_options", 00:20:21.488 "params": { 00:20:21.488 "impl_name": "posix", 00:20:21.488 "recv_buf_size": 2097152, 00:20:21.488 "send_buf_size": 2097152, 00:20:21.488 
"enable_recv_pipe": true, 00:20:21.488 "enable_quickack": false, 00:20:21.488 "enable_placement_id": 0, 00:20:21.488 "enable_zerocopy_send_server": true, 00:20:21.488 "enable_zerocopy_send_client": false, 00:20:21.488 "zerocopy_threshold": 0, 00:20:21.488 "tls_version": 0, 00:20:21.488 "enable_ktls": false 00:20:21.488 } 00:20:21.488 } 00:20:21.488 ] 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "subsystem": "vmd", 00:20:21.488 "config": [] 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "subsystem": "accel", 00:20:21.488 "config": [ 00:20:21.488 { 00:20:21.488 "method": "accel_set_options", 00:20:21.488 "params": { 00:20:21.488 "small_cache_size": 128, 00:20:21.488 "large_cache_size": 16, 00:20:21.488 "task_count": 2048, 00:20:21.488 "sequence_count": 2048, 00:20:21.488 "buf_count": 2048 00:20:21.488 } 00:20:21.488 } 00:20:21.488 ] 00:20:21.488 }, 00:20:21.488 { 00:20:21.488 "subsystem": "bdev", 00:20:21.488 "config": [ 00:20:21.488 { 00:20:21.488 "method": "bdev_set_options", 00:20:21.488 "params": { 00:20:21.488 "bdev_io_pool_size": 65535, 00:20:21.488 "bdev_io_cache_size": 256, 00:20:21.489 "bdev_auto_examine": true, 00:20:21.489 "iobuf_small_cache_size": 128, 00:20:21.489 "iobuf_large_cache_size": 16 00:20:21.489 } 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "method": "bdev_raid_set_options", 00:20:21.489 "params": { 00:20:21.489 "process_window_size_kb": 1024, 00:20:21.489 "process_max_bandwidth_mb_sec": 0 00:20:21.489 } 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "method": "bdev_iscsi_set_options", 00:20:21.489 "params": { 00:20:21.489 "timeout_sec": 30 00:20:21.489 } 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "method": "bdev_nvme_set_options", 00:20:21.489 "params": { 00:20:21.489 "action_on_timeout": "none", 00:20:21.489 "timeout_us": 0, 00:20:21.489 "timeout_admin_us": 0, 00:20:21.489 "keep_alive_timeout_ms": 10000, 00:20:21.489 "arbitration_burst": 0, 00:20:21.489 "low_priority_weight": 0, 00:20:21.489 "medium_priority_weight": 0, 00:20:21.489 
"high_priority_weight": 0, 00:20:21.489 "nvme_adminq_poll_period_us": 10000, 00:20:21.489 "nvme_ioq_poll_period_us": 0, 00:20:21.489 "io_queue_requests": 512, 00:20:21.489 "delay_cmd_submit": true, 00:20:21.489 "transport_retry_count": 4, 00:20:21.489 "bdev_retry_count": 3, 00:20:21.489 "transport_ack_timeout": 0, 00:20:21.489 "ctrlr_loss_timeout_sec": 0, 00:20:21.489 "reconnect_delay_sec": 0, 00:20:21.489 "fast_io_fail_timeout_sec": 0, 00:20:21.489 "disable_auto_failback": false, 00:20:21.489 "generate_uuids": false, 00:20:21.489 "transport_tos": 0, 00:20:21.489 "nvme_error_stat": false, 00:20:21.489 "rdma_srq_size": 0, 00:20:21.489 "io_path_stat": false, 00:20:21.489 "allow_accel_sequence": false, 00:20:21.489 "rdma_max_cq_size": 0, 00:20:21.489 "rdma_cm_event_timeout_ms": 0, 00:20:21.489 "dhchap_digests": [ 00:20:21.489 "sha256", 00:20:21.489 "sha384", 00:20:21.489 "sha512" 00:20:21.489 ], 00:20:21.489 "dhchap_dhgroups": [ 00:20:21.489 "null", 00:20:21.489 "ffdhe2048", 00:20:21.489 "ffdhe3072", 00:20:21.489 "ffdhe4096", 00:20:21.489 "ffdhe6144", 00:20:21.489 "ffdhe8192" 00:20:21.489 ] 00:20:21.489 } 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "method": "bdev_nvme_attach_controller", 00:20:21.489 "params": { 00:20:21.489 "name": "nvme0", 00:20:21.489 "trtype": "TCP", 00:20:21.489 "adrfam": "IPv4", 00:20:21.489 "traddr": "10.0.0.2", 00:20:21.489 "trsvcid": "4420", 00:20:21.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.489 "prchk_reftag": false, 00:20:21.489 "prchk_guard": false, 00:20:21.489 "ctrlr_loss_timeout_sec": 0, 00:20:21.489 "reconnect_delay_sec": 0, 00:20:21.489 "fast_io_fail_timeout_sec": 0, 00:20:21.489 "psk": "key0", 00:20:21.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.489 "hdgst": false, 00:20:21.489 "ddgst": false 00:20:21.489 } 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "method": "bdev_nvme_set_hotplug", 00:20:21.489 "params": { 00:20:21.489 "period_us": 100000, 00:20:21.489 "enable": false 00:20:21.489 } 00:20:21.489 }, 
00:20:21.489 { 00:20:21.489 "method": "bdev_enable_histogram", 00:20:21.489 "params": { 00:20:21.489 "name": "nvme0n1", 00:20:21.489 "enable": true 00:20:21.489 } 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "method": "bdev_wait_for_examine" 00:20:21.489 } 00:20:21.489 ] 00:20:21.489 }, 00:20:21.489 { 00:20:21.489 "subsystem": "nbd", 00:20:21.489 "config": [] 00:20:21.489 } 00:20:21.489 ] 00:20:21.489 }' 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2498152 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2498152 ']' 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2498152 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498152 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498152' 00:20:21.489 killing process with pid 2498152 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2498152 00:20:21.489 Received shutdown signal, test time was about 1.000000 seconds 00:20:21.489 00:20:21.489 Latency(us) 00:20:21.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.489 =================================================================================================================== 00:20:21.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:20:21.489 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2498152 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2498001 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2498001 ']' 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2498001 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498001 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498001' 00:20:21.746 killing process with pid 2498001 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2498001 00:20:21.746 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2498001 00:20:22.312 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:22.312 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:22.312 "subsystems": [ 00:20:22.312 { 00:20:22.312 "subsystem": "keyring", 00:20:22.312 "config": [ 00:20:22.312 { 00:20:22.312 "method": "keyring_file_add_key", 00:20:22.312 "params": { 00:20:22.312 "name": "key0", 00:20:22.312 "path": "/tmp/tmp.vPeRAgcRz1" 00:20:22.312 } 00:20:22.312 } 00:20:22.312 ] 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 
"subsystem": "iobuf", 00:20:22.312 "config": [ 00:20:22.312 { 00:20:22.312 "method": "iobuf_set_options", 00:20:22.312 "params": { 00:20:22.312 "small_pool_count": 8192, 00:20:22.312 "large_pool_count": 1024, 00:20:22.312 "small_bufsize": 8192, 00:20:22.312 "large_bufsize": 135168 00:20:22.312 } 00:20:22.312 } 00:20:22.312 ] 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "subsystem": "sock", 00:20:22.312 "config": [ 00:20:22.312 { 00:20:22.312 "method": "sock_set_default_impl", 00:20:22.312 "params": { 00:20:22.312 "impl_name": "posix" 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "method": "sock_impl_set_options", 00:20:22.312 "params": { 00:20:22.312 "impl_name": "ssl", 00:20:22.312 "recv_buf_size": 4096, 00:20:22.312 "send_buf_size": 4096, 00:20:22.312 "enable_recv_pipe": true, 00:20:22.312 "enable_quickack": false, 00:20:22.312 "enable_placement_id": 0, 00:20:22.312 "enable_zerocopy_send_server": true, 00:20:22.312 "enable_zerocopy_send_client": false, 00:20:22.312 "zerocopy_threshold": 0, 00:20:22.312 "tls_version": 0, 00:20:22.312 "enable_ktls": false 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "method": "sock_impl_set_options", 00:20:22.312 "params": { 00:20:22.312 "impl_name": "posix", 00:20:22.312 "recv_buf_size": 2097152, 00:20:22.312 "send_buf_size": 2097152, 00:20:22.312 "enable_recv_pipe": true, 00:20:22.312 "enable_quickack": false, 00:20:22.312 "enable_placement_id": 0, 00:20:22.312 "enable_zerocopy_send_server": true, 00:20:22.312 "enable_zerocopy_send_client": false, 00:20:22.312 "zerocopy_threshold": 0, 00:20:22.312 "tls_version": 0, 00:20:22.312 "enable_ktls": false 00:20:22.312 } 00:20:22.312 } 00:20:22.312 ] 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "subsystem": "vmd", 00:20:22.312 "config": [] 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "subsystem": "accel", 00:20:22.312 "config": [ 00:20:22.312 { 00:20:22.312 "method": "accel_set_options", 00:20:22.312 "params": { 00:20:22.312 "small_cache_size": 128, 00:20:22.312 
"large_cache_size": 16, 00:20:22.312 "task_count": 2048, 00:20:22.312 "sequence_count": 2048, 00:20:22.312 "buf_count": 2048 00:20:22.312 } 00:20:22.312 } 00:20:22.312 ] 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "subsystem": "bdev", 00:20:22.312 "config": [ 00:20:22.312 { 00:20:22.312 "method": "bdev_set_options", 00:20:22.312 "params": { 00:20:22.312 "bdev_io_pool_size": 65535, 00:20:22.312 "bdev_io_cache_size": 256, 00:20:22.312 "bdev_auto_examine": true, 00:20:22.312 "iobuf_small_cache_size": 128, 00:20:22.312 "iobuf_large_cache_size": 16 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "method": "bdev_raid_set_options", 00:20:22.312 "params": { 00:20:22.312 "process_window_size_kb": 1024, 00:20:22.312 "process_max_bandwidth_mb_sec": 0 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "method": "bdev_iscsi_set_options", 00:20:22.312 "params": { 00:20:22.312 "timeout_sec": 30 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "method": "bdev_nvme_set_options", 00:20:22.312 "params": { 00:20:22.312 "action_on_timeout": "none", 00:20:22.312 "timeout_us": 0, 00:20:22.312 "timeout_admin_us": 0, 00:20:22.312 "keep_alive_timeout_ms": 10000, 00:20:22.312 "arbitration_burst": 0, 00:20:22.312 "low_priority_weight": 0, 00:20:22.312 "medium_priority_weight": 0, 00:20:22.312 "high_priority_weight": 0, 00:20:22.312 "nvme_adminq_poll_period_us": 10000, 00:20:22.312 "nvme_ioq_poll_period_us": 0, 00:20:22.312 "io_queue_requests": 0, 00:20:22.312 "delay_cmd_submit": true, 00:20:22.312 "transport_retry_count": 4, 00:20:22.312 "bdev_retry_count": 3, 00:20:22.312 "transport_ack_timeout": 0, 00:20:22.312 "ctrlr_loss_timeout_sec": 0, 00:20:22.312 "reconnect_delay_sec": 0, 00:20:22.312 "fast_io_fail_timeout_sec": 0, 00:20:22.312 "disable_auto_failback": false, 00:20:22.312 "generate_uuids": false, 00:20:22.312 "transport_tos": 0, 00:20:22.312 "nvme_error_stat": false, 00:20:22.312 "rdma_srq_size": 0, 00:20:22.312 "io_path_stat": false, 00:20:22.312 
"allow_accel_sequence": false, 00:20:22.312 "rdma_max_cq_size": 0, 00:20:22.312 "rdma_cm_event_timeout_ms": 0, 00:20:22.312 "dhchap_digests": [ 00:20:22.312 "sha256", 00:20:22.312 "sha384", 00:20:22.312 "sha512" 00:20:22.312 ], 00:20:22.312 "dhchap_dhgroups": [ 00:20:22.312 "null", 00:20:22.312 "ffdhe2048", 00:20:22.312 "ffdhe3072", 00:20:22.312 "ffdhe4096", 00:20:22.312 "ffdhe6144", 00:20:22.312 "ffdhe8192" 00:20:22.312 ] 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.312 "method": "bdev_nvme_set_hotplug", 00:20:22.312 "params": { 00:20:22.312 "period_us": 100000, 00:20:22.312 "enable": false 00:20:22.312 } 00:20:22.312 }, 00:20:22.312 { 00:20:22.313 "method": "bdev_malloc_create", 00:20:22.313 "params": { 00:20:22.313 "name": "malloc0", 00:20:22.313 "num_blocks": 8192, 00:20:22.313 "block_size": 4096, 00:20:22.313 "physical_block_size": 4096, 00:20:22.313 "uuid": "0ff3347f-0f6c-46a1-8f47-eb6cbddd9174", 00:20:22.313 "optimal_io_boundary": 0, 00:20:22.313 "md_size": 0, 00:20:22.313 "dif_type": 0, 00:20:22.313 "dif_is_head_of_md": false, 00:20:22.313 "dif_pi_format": 0 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "bdev_wait_for_examine" 00:20:22.313 } 00:20:22.313 ] 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "subsystem": "nbd", 00:20:22.313 "config": [] 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "subsystem": "scheduler", 00:20:22.313 "config": [ 00:20:22.313 { 00:20:22.313 "method": "framework_set_scheduler", 00:20:22.313 "params": { 00:20:22.313 "name": "static" 00:20:22.313 } 00:20:22.313 } 00:20:22.313 ] 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "subsystem": "nvmf", 00:20:22.313 "config": [ 00:20:22.313 { 00:20:22.313 "method": "nvmf_set_config", 00:20:22.313 "params": { 00:20:22.313 "discovery_filter": "match_any", 00:20:22.313 "admin_cmd_passthru": { 00:20:22.313 "identify_ctrlr": false 00:20:22.313 } 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_set_max_subsystems", 00:20:22.313 "params": { 
00:20:22.313 "max_subsystems": 1024 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_set_crdt", 00:20:22.313 "params": { 00:20:22.313 "crdt1": 0, 00:20:22.313 "crdt2": 0, 00:20:22.313 "crdt3": 0 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_create_transport", 00:20:22.313 "params": { 00:20:22.313 "trtype": "TCP", 00:20:22.313 "max_queue_depth": 128, 00:20:22.313 "max_io_qpairs_per_ctrlr": 127, 00:20:22.313 "in_capsule_data_size": 4096, 00:20:22.313 "max_io_size": 131072, 00:20:22.313 "io_unit_size": 131072, 00:20:22.313 "max_aq_depth": 128, 00:20:22.313 "num_shared_buffers": 511, 00:20:22.313 "buf_cache_size": 4294967295, 00:20:22.313 "dif_insert_or_strip": false, 00:20:22.313 "zcopy": false, 00:20:22.313 "c2h_success": false, 00:20:22.313 "sock_priority": 0, 00:20:22.313 "abort_timeout_sec": 1, 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.313 "ack_timeout": 0, 00:20:22.313 "data_wr_pool_size": 0 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_create_subsystem", 00:20:22.313 "params": { 00:20:22.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.313 "allow_any_host": false, 00:20:22.313 "serial_number": "00000000000000000000", 00:20:22.313 "model_number": "SPDK bdev Controller", 00:20:22.313 "max_namespaces": 32, 00:20:22.313 "min_cntlid": 1, 00:20:22.313 "max_cntlid": 65519, 00:20:22.313 "ana_reporting": false 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_subsystem_add_host", 00:20:22.313 "params": { 00:20:22.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.313 "host": "nqn.2016-06.io.spdk:host1", 00:20:22.313 "psk": "key0" 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_subsystem_add_ns", 00:20:22.313 "params": { 00:20:22.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.313 "namespace": { 00:20:22.313 "nsid": 1, 00:20:22.313 "bdev_name": "malloc0", 
00:20:22.313 "nguid": "0FF3347F0F6C46A18F47EB6CBDDD9174", 00:20:22.313 "uuid": "0ff3347f-0f6c-46a1-8f47-eb6cbddd9174", 00:20:22.313 "no_auto_visible": false 00:20:22.313 } 00:20:22.313 } 00:20:22.313 }, 00:20:22.313 { 00:20:22.313 "method": "nvmf_subsystem_add_listener", 00:20:22.313 "params": { 00:20:22.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.313 "listen_address": { 00:20:22.313 "trtype": "TCP", 00:20:22.313 "adrfam": "IPv4", 00:20:22.313 "traddr": "10.0.0.2", 00:20:22.313 "trsvcid": "4420" 00:20:22.313 }, 00:20:22.313 "secure_channel": false, 00:20:22.313 "sock_impl": "ssl" 00:20:22.313 } 00:20:22.313 } 00:20:22.313 ] 00:20:22.313 } 00:20:22.313 ] 00:20:22.313 }' 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2498560 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2498560 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2498560 ']' 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.313 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.313 [2024-07-25 07:25:54.591992] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:20:22.313 [2024-07-25 07:25:54.592069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.313 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.313 [2024-07-25 07:25:54.655208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.313 [2024-07-25 07:25:54.760219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.313 [2024-07-25 07:25:54.760295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.313 [2024-07-25 07:25:54.760319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.313 [2024-07-25 07:25:54.760329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.313 [2024-07-25 07:25:54.760339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.313 [2024-07-25 07:25:54.760406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.571 [2024-07-25 07:25:54.998805] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.571 [2024-07-25 07:25:55.036073] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.571 [2024-07-25 07:25:55.036390] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2498708 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2498708 /var/tmp/bdevperf.sock 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2498708 ']' 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:20:23.136 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.137 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:23.137 "subsystems": [ 00:20:23.137 { 00:20:23.137 "subsystem": "keyring", 00:20:23.137 "config": [ 00:20:23.137 { 00:20:23.137 "method": "keyring_file_add_key", 00:20:23.137 "params": { 00:20:23.137 "name": "key0", 00:20:23.137 "path": "/tmp/tmp.vPeRAgcRz1" 00:20:23.137 } 00:20:23.137 } 00:20:23.137 ] 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "subsystem": "iobuf", 00:20:23.137 "config": [ 00:20:23.137 { 00:20:23.137 "method": "iobuf_set_options", 00:20:23.137 "params": { 00:20:23.137 "small_pool_count": 8192, 00:20:23.137 "large_pool_count": 1024, 00:20:23.137 "small_bufsize": 8192, 00:20:23.137 "large_bufsize": 135168 00:20:23.137 } 00:20:23.137 } 00:20:23.137 ] 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "subsystem": "sock", 00:20:23.137 "config": [ 00:20:23.137 { 00:20:23.137 "method": "sock_set_default_impl", 00:20:23.137 "params": { 00:20:23.137 "impl_name": "posix" 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "sock_impl_set_options", 00:20:23.137 "params": { 00:20:23.137 "impl_name": "ssl", 00:20:23.137 "recv_buf_size": 4096, 00:20:23.137 "send_buf_size": 4096, 00:20:23.137 "enable_recv_pipe": true, 00:20:23.137 "enable_quickack": false, 00:20:23.137 "enable_placement_id": 0, 00:20:23.137 "enable_zerocopy_send_server": true, 00:20:23.137 "enable_zerocopy_send_client": false, 00:20:23.137 "zerocopy_threshold": 0, 00:20:23.137 "tls_version": 0, 00:20:23.137 "enable_ktls": false 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "sock_impl_set_options", 00:20:23.137 "params": { 00:20:23.137 "impl_name": "posix", 
00:20:23.137 "recv_buf_size": 2097152, 00:20:23.137 "send_buf_size": 2097152, 00:20:23.137 "enable_recv_pipe": true, 00:20:23.137 "enable_quickack": false, 00:20:23.137 "enable_placement_id": 0, 00:20:23.137 "enable_zerocopy_send_server": true, 00:20:23.137 "enable_zerocopy_send_client": false, 00:20:23.137 "zerocopy_threshold": 0, 00:20:23.137 "tls_version": 0, 00:20:23.137 "enable_ktls": false 00:20:23.137 } 00:20:23.137 } 00:20:23.137 ] 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "subsystem": "vmd", 00:20:23.137 "config": [] 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "subsystem": "accel", 00:20:23.137 "config": [ 00:20:23.137 { 00:20:23.137 "method": "accel_set_options", 00:20:23.137 "params": { 00:20:23.137 "small_cache_size": 128, 00:20:23.137 "large_cache_size": 16, 00:20:23.137 "task_count": 2048, 00:20:23.137 "sequence_count": 2048, 00:20:23.137 "buf_count": 2048 00:20:23.137 } 00:20:23.137 } 00:20:23.137 ] 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "subsystem": "bdev", 00:20:23.137 "config": [ 00:20:23.137 { 00:20:23.137 "method": "bdev_set_options", 00:20:23.137 "params": { 00:20:23.137 "bdev_io_pool_size": 65535, 00:20:23.137 "bdev_io_cache_size": 256, 00:20:23.137 "bdev_auto_examine": true, 00:20:23.137 "iobuf_small_cache_size": 128, 00:20:23.137 "iobuf_large_cache_size": 16 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_raid_set_options", 00:20:23.137 "params": { 00:20:23.137 "process_window_size_kb": 1024, 00:20:23.137 "process_max_bandwidth_mb_sec": 0 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_iscsi_set_options", 00:20:23.137 "params": { 00:20:23.137 "timeout_sec": 30 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_nvme_set_options", 00:20:23.137 "params": { 00:20:23.137 "action_on_timeout": "none", 00:20:23.137 "timeout_us": 0, 00:20:23.137 "timeout_admin_us": 0, 00:20:23.137 "keep_alive_timeout_ms": 10000, 00:20:23.137 "arbitration_burst": 0, 00:20:23.137 
"low_priority_weight": 0, 00:20:23.137 "medium_priority_weight": 0, 00:20:23.137 "high_priority_weight": 0, 00:20:23.137 "nvme_adminq_poll_period_us": 10000, 00:20:23.137 "nvme_ioq_poll_period_us": 0, 00:20:23.137 "io_queue_requests": 512, 00:20:23.137 "delay_cmd_submit": true, 00:20:23.137 "transport_retry_count": 4, 00:20:23.137 "bdev_retry_count": 3, 00:20:23.137 "transport_ack_timeout": 0, 00:20:23.137 "ctrlr_loss_timeout_sec": 0, 00:20:23.137 "reconnect_delay_sec": 0, 00:20:23.137 "fast_io_fail_timeout_sec": 0, 00:20:23.137 "disable_auto_failback": false, 00:20:23.137 "generate_uuids": false, 00:20:23.137 "transport_tos": 0, 00:20:23.137 "nvme_error_stat": false, 00:20:23.137 "rdma_srq_size": 0, 00:20:23.137 "io_path_stat": false, 00:20:23.137 "allow_accel_sequence": false, 00:20:23.137 "rdma_max_cq_size": 0, 00:20:23.137 "rdma_cm_event_timeout_ms": 0, 00:20:23.137 "dhchap_digests": [ 00:20:23.137 "sha256", 00:20:23.137 "sha384", 00:20:23.137 "sha512" 00:20:23.137 ], 00:20:23.137 "dhchap_dhgroups": [ 00:20:23.137 "null", 00:20:23.137 "ffdhe2048", 00:20:23.137 "ffdhe3072", 00:20:23.137 "ffdhe4096", 00:20:23.137 "ffdhe6144", 00:20:23.137 "ffdhe8192" 00:20:23.137 ] 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_nvme_attach_controller", 00:20:23.137 "params": { 00:20:23.137 "name": "nvme0", 00:20:23.137 "trtype": "TCP", 00:20:23.137 "adrfam": "IPv4", 00:20:23.137 "traddr": "10.0.0.2", 00:20:23.137 "trsvcid": "4420", 00:20:23.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.137 "prchk_reftag": false, 00:20:23.137 "prchk_guard": false, 00:20:23.137 "ctrlr_loss_timeout_sec": 0, 00:20:23.137 "reconnect_delay_sec": 0, 00:20:23.137 "fast_io_fail_timeout_sec": 0, 00:20:23.137 "psk": "key0", 00:20:23.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.137 "hdgst": false, 00:20:23.137 "ddgst": false 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_nvme_set_hotplug", 00:20:23.137 "params": { 00:20:23.137 
"period_us": 100000, 00:20:23.137 "enable": false 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_enable_histogram", 00:20:23.137 "params": { 00:20:23.137 "name": "nvme0n1", 00:20:23.137 "enable": true 00:20:23.137 } 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "method": "bdev_wait_for_examine" 00:20:23.137 } 00:20:23.137 ] 00:20:23.137 }, 00:20:23.137 { 00:20:23.137 "subsystem": "nbd", 00:20:23.137 "config": [] 00:20:23.137 } 00:20:23.137 ] 00:20:23.137 }' 00:20:23.137 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.137 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.137 [2024-07-25 07:25:55.665459] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:20:23.137 [2024-07-25 07:25:55.665548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498708 ] 00:20:23.395 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.395 [2024-07-25 07:25:55.727473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.395 [2024-07-25 07:25:55.846073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.653 [2024-07-25 07:25:56.032791] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.216 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.216 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:24.216 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:24.216 07:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:24.473 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.473 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:24.730 Running I/O for 1 seconds... 00:20:25.664 00:20:25.664 Latency(us) 00:20:25.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.664 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:25.664 Verification LBA range: start 0x0 length 0x2000 00:20:25.664 nvme0n1 : 1.04 2704.24 10.56 0.00 0.00 46485.37 6602.15 78837.38 00:20:25.665 =================================================================================================================== 00:20:25.665 Total : 2704.24 10.56 0.00 0.00 46485.37 6602.15 78837.38 00:20:25.665 0 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:25.665 nvmf_trace.0 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2498708 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2498708 ']' 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2498708 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498708 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498708' 00:20:25.665 killing process with pid 2498708 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2498708 00:20:25.665 Received shutdown signal, test time was about 1.000000 seconds 00:20:25.665 00:20:25.665 Latency(us) 00:20:25.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.665 
=================================================================================================================== 00:20:25.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.665 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2498708 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.923 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.923 rmmod nvme_tcp 00:20:25.923 rmmod nvme_fabrics 00:20:26.180 rmmod nvme_keyring 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2498560 ']' 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2498560 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2498560 ']' 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2498560 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498560 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498560' 00:20:26.180 killing process with pid 2498560 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2498560 00:20:26.180 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2498560 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.439 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.337 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.337 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YVksELHHo5 /tmp/tmp.vaLfKhnuVo /tmp/tmp.vPeRAgcRz1 00:20:28.337 00:20:28.337 real 1m22.782s 
00:20:28.337 user 2m12.584s 00:20:28.337 sys 0m27.300s 00:20:28.337 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.337 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.337 ************************************ 00:20:28.337 END TEST nvmf_tls 00:20:28.337 ************************************ 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.596 ************************************ 00:20:28.596 START TEST nvmf_fips 00:20:28.596 ************************************ 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.596 * Looking for test storage... 
00:20:28.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.596 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:28.597 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:28.597 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:28.598 Error setting digest 00:20:28.598 00E2CBAA7E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:28.598 00E2CBAA7E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.598 07:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.598 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.523 07:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:30.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:30.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.523 07:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.523 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:30.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.524 
07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:30.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.524 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:30.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:20:30.781 00:20:30.781 --- 10.0.0.2 ping statistics --- 00:20:30.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.781 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:30.781 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:20:30.782 00:20:30.782 --- 10.0.0.1 ping statistics --- 00:20:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.782 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2500960 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2500960 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2500960 ']' 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.782 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.782 [2024-07-25 07:26:03.259275] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:20:30.782 [2024-07-25 07:26:03.259386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.782 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.039 [2024-07-25 07:26:03.328381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.039 [2024-07-25 07:26:03.447507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.039 [2024-07-25 07:26:03.447570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.039 [2024-07-25 07:26:03.447587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.039 [2024-07-25 07:26:03.447601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.039 [2024-07-25 07:26:03.447612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.039 [2024-07-25 07:26:03.447642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:31.973 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:31.973 [2024-07-25 07:26:04.483758] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.973 [2024-07-25 07:26:04.499762] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.973 [2024-07-25 07:26:04.500073] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.231 [2024-07-25 07:26:04.530696] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:32.231 malloc0 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2501230 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2501230 /var/tmp/bdevperf.sock 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2501230 ']' 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.231 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.231 [2024-07-25 07:26:04.622842] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:20:32.231 [2024-07-25 07:26:04.622935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501230 ] 00:20:32.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.231 [2024-07-25 07:26:04.680950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.489 [2024-07-25 07:26:04.789880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.421 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.421 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:33.421 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:33.421 [2024-07-25 07:26:05.853714] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.421 [2024-07-25 07:26:05.853839] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.421 TLSTESTn1 00:20:33.678 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.678 Running I/O for 10 seconds... 00:20:43.637 00:20:43.637 Latency(us) 00:20:43.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.637 Verification LBA range: start 0x0 length 0x2000 00:20:43.637 TLSTESTn1 : 10.04 2940.48 11.49 0.00 0.00 43426.44 7815.77 68739.98 00:20:43.637 =================================================================================================================== 00:20:43.637 Total : 2940.48 11.49 0.00 0.00 43426.44 7815.77 68739.98 00:20:43.637 0 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:43.637 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:43.637 nvmf_trace.0 
00:20:43.894 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:43.894 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2501230 00:20:43.894 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2501230 ']' 00:20:43.894 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2501230 00:20:43.894 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:43.894 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.895 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2501230 00:20:43.895 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:43.895 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:43.895 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2501230' 00:20:43.895 killing process with pid 2501230 00:20:43.895 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2501230 00:20:43.895 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.895 00:20:43.895 Latency(us) 00:20:43.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.895 =================================================================================================================== 00:20:43.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.895 [2024-07-25 07:26:16.237385] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.895 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
2501230 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:44.152 rmmod nvme_tcp 00:20:44.152 rmmod nvme_fabrics 00:20:44.152 rmmod nvme_keyring 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2500960 ']' 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2500960 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2500960 ']' 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2500960 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2500960 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2500960' 00:20:44.152 killing process with pid 2500960 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2500960 00:20:44.152 [2024-07-25 07:26:16.607005] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:44.152 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2500960 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.410 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.940 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:46.940 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:46.940 00:20:46.941 real 0m18.057s 00:20:46.941 user 0m23.629s 00:20:46.941 sys 
0m6.096s 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:46.941 ************************************ 00:20:46.941 END TEST nvmf_fips 00:20:46.941 ************************************ 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.941 07:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local 
-ga e810 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.838 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.839 07:26:20 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:48.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:48.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:48.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:48.839 
Found net devices under 0000:0a:00.1: cvl_0_1 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.839 07:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.839 ************************************ 00:20:48.839 START TEST nvmf_perf_adq 00:20:48.839 ************************************ 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.839 * Looking for test storage... 
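The repeated "Found net devices under ..." entries above come from a sysfs glob: net interfaces bound to a PCI function show up as directories under `/sys/bus/pci/devices/<BDF>/net/`. A minimal sketch of that pattern, using a fake sysfs tree so it runs without hardware (the `cvl_0_*` names and bus addresses are taken from this log, not from any guaranteed layout):

```shell
#!/usr/bin/env bash
# Sketch of the lookup gather_supported_nvmf_pci_devs performs in the log:
# glob the per-device net/ directory, strip the path prefix, collect names.
# A temp directory stands in for /sys/bus/pci/devices here.
set -eu
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # one glob hit per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    net_devs+=("${pci_net_devs[@]}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
rm -rf "$sysfs"
```

The `(( 2 == 0 ))` checks in the trace are the script verifying that these arrays are non-empty before the TCP tests proceed.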
00:20:48.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.839 07:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.839 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.737 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:50.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:50.738 07:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:50.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:50.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:50.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:50.738 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:51.007 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:52.960 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.224 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:58.224 
07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:58.225 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:58.225 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.225 07:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:58.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.225 07:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:58.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.225 
07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.225 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:58.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:20:58.226 00:20:58.226 --- 10.0.0.2 ping statistics --- 00:20:58.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.226 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:58.226 00:20:58.226 --- 10.0.0.1 ping statistics --- 00:20:58.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.226 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2507016 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2507016 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2507016 ']' 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.226 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.226 [2024-07-25 07:26:30.668492] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:20:58.226 [2024-07-25 07:26:30.668610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.226 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.226 [2024-07-25 07:26:30.738923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.483 [2024-07-25 07:26:30.857677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.483 [2024-07-25 07:26:30.857738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.483 [2024-07-25 07:26:30.857762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.484 [2024-07-25 07:26:30.857775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.484 [2024-07-25 07:26:30.857787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.484 [2024-07-25 07:26:30.857853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.484 [2024-07-25 07:26:30.857922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.484 [2024-07-25 07:26:30.858016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.484 [2024-07-25 07:26:30.858018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.413 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:59.414 07:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 [2024-07-25 07:26:31.807181] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 Malloc1 00:20:59.414 07:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 [2024-07-25 07:26:31.858779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2507259 00:20:59.414 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:59.414 07:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:59.414 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:01.938 "tick_rate": 2700000000, 00:21:01.938 "poll_groups": [ 00:21:01.938 { 00:21:01.938 "name": "nvmf_tgt_poll_group_000", 00:21:01.938 "admin_qpairs": 1, 00:21:01.938 "io_qpairs": 1, 00:21:01.938 "current_admin_qpairs": 1, 00:21:01.938 "current_io_qpairs": 1, 00:21:01.938 "pending_bdev_io": 0, 00:21:01.938 "completed_nvme_io": 20540, 00:21:01.938 "transports": [ 00:21:01.938 { 00:21:01.938 "trtype": "TCP" 00:21:01.938 } 00:21:01.938 ] 00:21:01.938 }, 00:21:01.938 { 00:21:01.938 "name": "nvmf_tgt_poll_group_001", 00:21:01.938 "admin_qpairs": 0, 00:21:01.938 "io_qpairs": 1, 00:21:01.938 "current_admin_qpairs": 0, 00:21:01.938 "current_io_qpairs": 1, 00:21:01.938 "pending_bdev_io": 0, 00:21:01.938 "completed_nvme_io": 20572, 00:21:01.938 "transports": [ 00:21:01.938 { 00:21:01.938 "trtype": "TCP" 00:21:01.938 } 00:21:01.938 ] 00:21:01.938 }, 00:21:01.938 { 00:21:01.938 "name": "nvmf_tgt_poll_group_002", 00:21:01.938 "admin_qpairs": 0, 00:21:01.938 "io_qpairs": 1, 00:21:01.938 "current_admin_qpairs": 0, 00:21:01.938 "current_io_qpairs": 1, 00:21:01.938 "pending_bdev_io": 0, 
00:21:01.938 "completed_nvme_io": 20922, 00:21:01.938 "transports": [ 00:21:01.938 { 00:21:01.938 "trtype": "TCP" 00:21:01.938 } 00:21:01.938 ] 00:21:01.938 }, 00:21:01.938 { 00:21:01.938 "name": "nvmf_tgt_poll_group_003", 00:21:01.938 "admin_qpairs": 0, 00:21:01.938 "io_qpairs": 1, 00:21:01.938 "current_admin_qpairs": 0, 00:21:01.938 "current_io_qpairs": 1, 00:21:01.938 "pending_bdev_io": 0, 00:21:01.938 "completed_nvme_io": 17015, 00:21:01.938 "transports": [ 00:21:01.938 { 00:21:01.938 "trtype": "TCP" 00:21:01.938 } 00:21:01.938 ] 00:21:01.938 } 00:21:01.938 ] 00:21:01.938 }' 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:01.938 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2507259 00:21:10.039 Initializing NVMe Controllers 00:21:10.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:10.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:10.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:10.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:10.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:10.039 Initialization complete. Launching workers. 
00:21:10.039 ======================================================== 00:21:10.039 Latency(us) 00:21:10.039 Device Information : IOPS MiB/s Average min max 00:21:10.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10984.14 42.91 5827.81 1113.62 9612.48 00:21:10.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10883.44 42.51 5881.69 1675.30 10291.60 00:21:10.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10845.64 42.37 5902.29 1765.05 10026.44 00:21:10.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8961.25 35.00 7141.63 1326.42 11623.73 00:21:10.039 ======================================================== 00:21:10.039 Total : 41674.46 162.79 6143.77 1113.62 11623.73 00:21:10.039 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.039 rmmod nvme_tcp 00:21:10.039 rmmod nvme_fabrics 00:21:10.039 rmmod nvme_keyring 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:10.039 07:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2507016 ']' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2507016 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2507016 ']' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2507016 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2507016 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2507016' 00:21:10.039 killing process with pid 2507016 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2507016 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2507016 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.039 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.936 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:11.937 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:11.937 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:12.870 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:14.767 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.056 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.057 07:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:20.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:20.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:20.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:20.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:20.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:21:20.057 00:21:20.057 --- 10.0.0.2 ping statistics --- 00:21:20.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.057 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:21:20.057 00:21:20.057 --- 10.0.0.1 ping statistics --- 00:21:20.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.057 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:20.057 net.core.busy_poll = 1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:20.057 net.core.busy_read = 1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.057 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2509877 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2509877 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2509877 ']' 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.058 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.058 [2024-07-25 07:26:52.437925] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:21:20.058 [2024-07-25 07:26:52.438029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.058 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.058 [2024-07-25 07:26:52.511740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.314 [2024-07-25 07:26:52.636441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.315 [2024-07-25 07:26:52.636497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.315 [2024-07-25 07:26:52.636528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.315 [2024-07-25 07:26:52.636540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.315 [2024-07-25 07:26:52.636550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.315 [2024-07-25 07:26:52.636625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.315 [2024-07-25 07:26:52.636706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.315 [2024-07-25 07:26:52.636755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.315 [2024-07-25 07:26:52.636758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:20.315 07:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.315 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 [2024-07-25 07:26:52.869311] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 Malloc1 00:21:20.572 07:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 [2024-07-25 07:26:52.922646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2509913 00:21:20.572 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:20.572 07:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:20.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.466 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:22.466 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.466 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.466 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.466 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:22.466 "tick_rate": 2700000000, 00:21:22.466 "poll_groups": [ 00:21:22.466 { 00:21:22.466 "name": "nvmf_tgt_poll_group_000", 00:21:22.466 "admin_qpairs": 1, 00:21:22.466 "io_qpairs": 1, 00:21:22.466 "current_admin_qpairs": 1, 00:21:22.466 "current_io_qpairs": 1, 00:21:22.466 "pending_bdev_io": 0, 00:21:22.466 "completed_nvme_io": 21115, 00:21:22.466 "transports": [ 00:21:22.466 { 00:21:22.466 "trtype": "TCP" 00:21:22.466 } 00:21:22.466 ] 00:21:22.466 }, 00:21:22.466 { 00:21:22.466 "name": "nvmf_tgt_poll_group_001", 00:21:22.466 "admin_qpairs": 0, 00:21:22.466 "io_qpairs": 3, 00:21:22.466 "current_admin_qpairs": 0, 00:21:22.466 "current_io_qpairs": 3, 00:21:22.466 "pending_bdev_io": 0, 00:21:22.466 "completed_nvme_io": 27711, 00:21:22.466 "transports": [ 00:21:22.466 { 00:21:22.466 "trtype": "TCP" 00:21:22.466 } 00:21:22.466 ] 00:21:22.466 }, 00:21:22.466 { 00:21:22.466 "name": "nvmf_tgt_poll_group_002", 00:21:22.466 "admin_qpairs": 0, 00:21:22.466 "io_qpairs": 0, 00:21:22.466 "current_admin_qpairs": 0, 00:21:22.466 "current_io_qpairs": 0, 00:21:22.466 "pending_bdev_io": 0, 
00:21:22.466 "completed_nvme_io": 0, 00:21:22.466 "transports": [ 00:21:22.466 { 00:21:22.466 "trtype": "TCP" 00:21:22.466 } 00:21:22.466 ] 00:21:22.466 }, 00:21:22.466 { 00:21:22.466 "name": "nvmf_tgt_poll_group_003", 00:21:22.466 "admin_qpairs": 0, 00:21:22.466 "io_qpairs": 0, 00:21:22.466 "current_admin_qpairs": 0, 00:21:22.466 "current_io_qpairs": 0, 00:21:22.466 "pending_bdev_io": 0, 00:21:22.466 "completed_nvme_io": 0, 00:21:22.466 "transports": [ 00:21:22.466 { 00:21:22.466 "trtype": "TCP" 00:21:22.466 } 00:21:22.466 ] 00:21:22.466 } 00:21:22.467 ] 00:21:22.467 }' 00:21:22.467 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:22.467 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:22.467 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:22.467 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:22.467 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2509913 00:21:32.424 Initializing NVMe Controllers 00:21:32.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:32.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:32.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:32.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:32.424 Initialization complete. Launching workers. 
00:21:32.424 ======================================================== 00:21:32.424 Latency(us) 00:21:32.424 Device Information : IOPS MiB/s Average min max 00:21:32.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11180.60 43.67 5725.11 1807.19 9868.11 00:21:32.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4767.90 18.62 13423.39 2226.41 63137.45 00:21:32.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4805.60 18.77 13318.65 2172.28 61081.45 00:21:32.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5008.20 19.56 12828.67 2208.51 61447.56 00:21:32.424 ======================================================== 00:21:32.424 Total : 25762.29 100.63 9947.26 1807.19 63137.45 00:21:32.424 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.424 rmmod nvme_tcp 00:21:32.424 rmmod nvme_fabrics 00:21:32.424 rmmod nvme_keyring 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:32.424 07:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2509877 ']' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2509877 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2509877 ']' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2509877 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2509877 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2509877' 00:21:32.424 killing process with pid 2509877 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2509877 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2509877 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.424 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.988 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.988 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:32.988 00:21:32.988 real 0m44.504s 00:21:32.988 user 2m40.813s 00:21:32.988 sys 0m10.297s 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.246 ************************************ 00:21:33.246 END TEST nvmf_perf_adq 00:21:33.246 ************************************ 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.246 ************************************ 00:21:33.246 START TEST nvmf_shutdown 00:21:33.246 ************************************ 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:33.246 * Looking for test storage... 
00:21:33.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.246 07:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:33.246 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:33.247 07:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.247 ************************************ 00:21:33.247 START TEST nvmf_shutdown_tc1 00:21:33.247 ************************************ 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.247 07:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.247 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:35.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.145 07:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:35.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:35.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:35.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.145 07:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.145 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.403 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.403 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.403 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:21:35.404 00:21:35.404 --- 10.0.0.2 ping statistics --- 00:21:35.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.404 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:21:35.404 00:21:35.404 --- 10.0.0.1 ping statistics --- 00:21:35.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.404 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.404 
07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2513430 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2513430 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2513430 ']' 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.404 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.404 [2024-07-25 07:27:07.828289] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:21:35.404 [2024-07-25 07:27:07.828377] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.404 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.404 [2024-07-25 07:27:07.896102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.662 [2024-07-25 07:27:08.020070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.662 [2024-07-25 07:27:08.020132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.662 [2024-07-25 07:27:08.020149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.662 [2024-07-25 07:27:08.020162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.662 [2024-07-25 07:27:08.020174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.662 [2024-07-25 07:27:08.020290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.662 [2024-07-25 07:27:08.020318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.662 [2024-07-25 07:27:08.020377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.662 [2024-07-25 07:27:08.020380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.662 [2024-07-25 07:27:08.179439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.662 07:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.662 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.920 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.920 Malloc1 00:21:35.920 [2024-07-25 07:27:08.254167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.920 Malloc2 00:21:35.920 Malloc3 00:21:35.920 Malloc4 00:21:35.920 Malloc5 00:21:36.178 Malloc6 00:21:36.178 Malloc7 00:21:36.178 Malloc8 00:21:36.178 Malloc9 
00:21:36.178 Malloc10 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2513781 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2513781 /var/tmp/bdevperf.sock 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2513781 ']' 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:36.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:36.178 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.178 { 00:21:36.178 "params": { 00:21:36.178 "name": "Nvme$subsystem", 00:21:36.178 "trtype": "$TEST_TRANSPORT", 00:21:36.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.178 "adrfam": "ipv4", 00:21:36.178 "trsvcid": "$NVMF_PORT", 00:21:36.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.178 "hdgst": ${hdgst:-false}, 00:21:36.178 "ddgst": ${ddgst:-false} 00:21:36.178 }, 00:21:36.178 "method": "bdev_nvme_attach_controller" 00:21:36.178 } 00:21:36.178 EOF 00:21:36.178 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": 
${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 
00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.437 EOF 00:21:36.437 )") 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.437 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.437 { 00:21:36.437 "params": { 00:21:36.437 "name": "Nvme$subsystem", 00:21:36.437 "trtype": "$TEST_TRANSPORT", 00:21:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.437 "adrfam": "ipv4", 00:21:36.437 "trsvcid": "$NVMF_PORT", 00:21:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.437 "hdgst": ${hdgst:-false}, 00:21:36.437 "ddgst": ${ddgst:-false} 00:21:36.437 }, 00:21:36.437 "method": "bdev_nvme_attach_controller" 00:21:36.437 } 00:21:36.438 EOF 00:21:36.438 )") 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.438 { 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme$subsystem", 00:21:36.438 "trtype": "$TEST_TRANSPORT", 00:21:36.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "$NVMF_PORT", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.438 "hdgst": ${hdgst:-false}, 00:21:36.438 "ddgst": ${ddgst:-false} 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 } 00:21:36.438 EOF 00:21:36.438 )") 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:36.438 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme1", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme2", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme3", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme4", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 
00:21:36.438 "params": { 00:21:36.438 "name": "Nvme5", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme6", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme7", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme8", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme9", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.438 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 },{ 00:21:36.438 "params": { 00:21:36.438 "name": "Nvme10", 00:21:36.438 "trtype": "tcp", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "adrfam": "ipv4", 00:21:36.438 "trsvcid": "4420", 00:21:36.438 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.438 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.438 "hdgst": false, 00:21:36.438 "ddgst": false 00:21:36.438 }, 00:21:36.438 "method": "bdev_nvme_attach_controller" 00:21:36.438 }' 00:21:36.438 [2024-07-25 07:27:08.747992] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:36.438 [2024-07-25 07:27:08.748069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:36.438 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.438 [2024-07-25 07:27:08.813805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.438 [2024-07-25 07:27:08.923969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:38.334 07:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2513781 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:38.334 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:39.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2513781 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2513430 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.266 { 00:21:39.266 "params": { 00:21:39.266 "name": "Nvme$subsystem", 00:21:39.266 "trtype": "$TEST_TRANSPORT", 00:21:39.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.266 "adrfam": "ipv4", 00:21:39.266 "trsvcid": 
"$NVMF_PORT", 00:21:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.266 "hdgst": ${hdgst:-false}, 00:21:39.266 "ddgst": ${ddgst:-false} 00:21:39.266 }, 00:21:39.266 "method": "bdev_nvme_attach_controller" 00:21:39.266 } 00:21:39.266 EOF 00:21:39.266 )") 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.266 { 00:21:39.266 "params": { 00:21:39.266 "name": "Nvme$subsystem", 00:21:39.266 "trtype": "$TEST_TRANSPORT", 00:21:39.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.266 "adrfam": "ipv4", 00:21:39.266 "trsvcid": "$NVMF_PORT", 00:21:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.266 "hdgst": ${hdgst:-false}, 00:21:39.266 "ddgst": ${ddgst:-false} 00:21:39.266 }, 00:21:39.266 "method": "bdev_nvme_attach_controller" 00:21:39.266 } 00:21:39.266 EOF 00:21:39.266 )") 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.266 { 00:21:39.266 "params": { 00:21:39.266 "name": "Nvme$subsystem", 00:21:39.266 "trtype": "$TEST_TRANSPORT", 00:21:39.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.266 "adrfam": "ipv4", 00:21:39.266 "trsvcid": "$NVMF_PORT", 00:21:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.266 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:39.266 "hdgst": ${hdgst:-false}, 00:21:39.266 "ddgst": ${ddgst:-false} 00:21:39.266 }, 00:21:39.266 "method": "bdev_nvme_attach_controller" 00:21:39.266 } 00:21:39.266 EOF 00:21:39.266 )") 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.266 { 00:21:39.266 "params": { 00:21:39.266 "name": "Nvme$subsystem", 00:21:39.266 "trtype": "$TEST_TRANSPORT", 00:21:39.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.266 "adrfam": "ipv4", 00:21:39.266 "trsvcid": "$NVMF_PORT", 00:21:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.266 "hdgst": ${hdgst:-false}, 00:21:39.266 "ddgst": ${ddgst:-false} 00:21:39.266 }, 00:21:39.266 "method": "bdev_nvme_attach_controller" 00:21:39.266 } 00:21:39.266 EOF 00:21:39.266 )") 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.266 { 00:21:39.266 "params": { 00:21:39.266 "name": "Nvme$subsystem", 00:21:39.266 "trtype": "$TEST_TRANSPORT", 00:21:39.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.266 "adrfam": "ipv4", 00:21:39.266 "trsvcid": "$NVMF_PORT", 00:21:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.266 "hdgst": ${hdgst:-false}, 00:21:39.266 "ddgst": ${ddgst:-false} 00:21:39.266 
}, 00:21:39.266 "method": "bdev_nvme_attach_controller" 00:21:39.266 } 00:21:39.266 EOF 00:21:39.266 )") 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.266 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.266 { 00:21:39.266 "params": { 00:21:39.267 "name": "Nvme$subsystem", 00:21:39.267 "trtype": "$TEST_TRANSPORT", 00:21:39.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "$NVMF_PORT", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.267 "hdgst": ${hdgst:-false}, 00:21:39.267 "ddgst": ${ddgst:-false} 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 } 00:21:39.267 EOF 00:21:39.267 )") 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.267 { 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme$subsystem", 00:21:39.267 "trtype": "$TEST_TRANSPORT", 00:21:39.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "$NVMF_PORT", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.267 "hdgst": ${hdgst:-false}, 00:21:39.267 "ddgst": ${ddgst:-false} 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 } 00:21:39.267 EOF 00:21:39.267 )") 00:21:39.267 07:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.267 { 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme$subsystem", 00:21:39.267 "trtype": "$TEST_TRANSPORT", 00:21:39.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "$NVMF_PORT", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.267 "hdgst": ${hdgst:-false}, 00:21:39.267 "ddgst": ${ddgst:-false} 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 } 00:21:39.267 EOF 00:21:39.267 )") 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.267 { 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme$subsystem", 00:21:39.267 "trtype": "$TEST_TRANSPORT", 00:21:39.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "$NVMF_PORT", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.267 "hdgst": ${hdgst:-false}, 00:21:39.267 "ddgst": ${ddgst:-false} 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 } 00:21:39.267 EOF 00:21:39.267 )") 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.267 07:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.267 { 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme$subsystem", 00:21:39.267 "trtype": "$TEST_TRANSPORT", 00:21:39.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "$NVMF_PORT", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.267 "hdgst": ${hdgst:-false}, 00:21:39.267 "ddgst": ${ddgst:-false} 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 } 00:21:39.267 EOF 00:21:39.267 )") 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
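The xtrace above repeats gen_nvmf_target_json's per-subsystem loop: each iteration appends one JSON fragment to a bash array via a heredoc (`config+=("$(cat <<-EOF ...)")`), and the fragments are then comma-joined with `IFS=,` and fed through `jq .` for validation and pretty-printing. A minimal standalone sketch of that pattern follows; the two-subsystem count, the illustrative transport values, and the outer `"config"` wrapper are assumptions for the sketch, not the harness's exact output:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern visible in the xtrace:
# one JSON fragment per subsystem, built with a heredoc, collected in an array.
TEST_TRANSPORT=tcp              # illustrative values; the harness derives
NVMF_FIRST_TARGET_IP=10.0.0.2   # these from its own test environment
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the fragments (IFS=,) into one JSON document, mirroring the
# "IFS=, / printf / jq ." steps in the trace; jq validates the result.
IFS=,
json="{\"config\":[${config[*]}]}"
if command -v jq >/dev/null 2>&1; then
  printf '%s\n' "$json" | jq .
else
  printf '%s\n' "$json"
fi
```

The heredoc keeps the JSON readable while still interpolating `$subsystem` and the transport variables; `${hdgst:-false}` defaults the digest flags off unless the caller exported them, which is why the merged output in the trace shows `"hdgst": false`.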
00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:39.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme1", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme2", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme3", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme4", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 
00:21:39.267 "name": "Nvme5", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme6", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme7", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme8", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme9", 00:21:39.267 "trtype": "tcp", 00:21:39.267 "traddr": "10.0.0.2", 00:21:39.267 "adrfam": "ipv4", 00:21:39.267 "trsvcid": "4420", 00:21:39.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:39.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:39.267 "hdgst": false, 00:21:39.267 "ddgst": false 00:21:39.267 }, 00:21:39.267 "method": "bdev_nvme_attach_controller" 00:21:39.267 },{ 00:21:39.267 "params": { 00:21:39.267 "name": "Nvme10", 00:21:39.268 "trtype": "tcp", 00:21:39.268 "traddr": "10.0.0.2", 00:21:39.268 "adrfam": "ipv4", 00:21:39.268 "trsvcid": "4420", 00:21:39.268 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:39.268 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:39.268 "hdgst": false, 00:21:39.268 "ddgst": false 00:21:39.268 }, 00:21:39.268 "method": "bdev_nvme_attach_controller" 00:21:39.268 }' 00:21:39.268 [2024-07-25 07:27:11.762650] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:39.268 [2024-07-25 07:27:11.762727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514287 ] 00:21:39.268 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.525 [2024-07-25 07:27:11.826595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.525 [2024-07-25 07:27:11.936115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.450 Running I/O for 1 seconds... 
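After this 1-second verify run, the teardown path (the `killprocess` calls later in the trace) first probes the target with `kill -0`, which delivers no signal and only reports via its exit status whether the PID can be signaled. A minimal sketch of that probe-then-kill idiom; the helper name is hypothetical:

```shell
#!/usr/bin/env bash
# Sketch of the probe-then-kill idiom used by the harness's killprocess:
# "kill -0 PID" sends no signal; its exit status says whether PID is signalable.
stop_if_running() {  # hypothetical helper name
  local pid=$1
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"                    # SIGTERM first; the trace escalates to -9
    wait "$pid" 2>/dev/null || true  # reap so the PID cannot be recycled
  fi
}

sleep 60 &   # stand-in for a target process
pid=$!
stop_if_running "$pid"
```

Reaping with `wait` matters in a long-lived test harness: without it, a dead child lingers as a zombie and `kill -0` would keep reporting it as alive.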
00:21:42.383
00:21:42.383 Latency(us)
00:21:42.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme1n1 : 1.07 239.41 14.96 0.00 0.00 263880.63 18252.99 251658.24
00:21:42.383 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme2n1 : 1.14 168.42 10.53 0.00 0.00 370023.03 48739.37 309135.74
00:21:42.383 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme3n1 : 1.10 233.32 14.58 0.00 0.00 262369.85 18447.17 251658.24
00:21:42.383 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme4n1 : 1.16 276.84 17.30 0.00 0.00 217104.69 19806.44 240784.12
00:21:42.383 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme5n1 : 1.13 225.85 14.12 0.00 0.00 262314.29 18738.44 259425.47
00:21:42.383 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme6n1 : 1.17 218.17 13.64 0.00 0.00 267231.00 22622.06 320009.86
00:21:42.383 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme7n1 : 1.15 285.75 17.86 0.00 0.00 199533.44 4951.61 225249.66
00:21:42.383 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme8n1 : 1.16 274.69 17.17 0.00 0.00 205249.50 14951.92 237677.23
00:21:42.383 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme9n1 : 1.15 222.81 13.93 0.00 0.00 248252.68 20291.89 267192.70
00:21:42.383 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.383 Verification LBA range: start 0x0 length 0x400
00:21:42.383 Nvme10n1 : 1.18 219.55 13.72 0.00 0.00 248582.31 1189.36 284280.60
00:21:42.383 ===================================================================================================================
00:21:42.383 Total : 2364.83 147.80 0.00 0.00 248171.66 1189.36 320009.86
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.641 
07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.641 rmmod nvme_tcp 00:21:42.641 rmmod nvme_fabrics 00:21:42.641 rmmod nvme_keyring 00:21:42.641 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2513430 ']' 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2513430 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2513430 ']' 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2513430 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513430 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513430' 00:21:42.898 killing process 
with pid 2513430 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2513430 00:21:42.898 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2513430 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.464 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.365 00:21:45.365 real 0m12.082s 00:21:45.365 user 0m35.273s 00:21:45.365 sys 0m3.242s 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.365 ************************************ 00:21:45.365 END TEST nvmf_shutdown_tc1 00:21:45.365 ************************************ 
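The teardown trace above follows a fixed pattern from autotest_common.sh: check the target pid (2513430) still exists, look up its process name and refuse to kill sudo, then kill and wait. A minimal runnable sketch of that pattern, with a background sleep standing in for the nvmf target process:

```shell
#!/usr/bin/env bash
# Minimal sketch of the killprocess pattern from the trace above: verify the
# process exists, mirror the log's refuse-to-kill-sudo guard, signal it, then
# reap it with wait so no zombie is left behind.
# The background sleep stands in for the nvmf target (pid 2513430 in the log).
sleep 100 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
  comm=$(ps -o comm= -p "$pid" | tr -d ' ')  # process name, as in the log's check
  if [ "$comm" = sudo ]; then
    echo "refusing to kill $pid ($comm)"
    exit 1
  fi
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap; ignore the signal exit code
fi
echo "killed $pid ($comm)"
```

Without the `wait`, the killed child would linger as a zombie in the parent shell's process table, so a later `kill -0` check would still report it alive.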
00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.365 ************************************ 00:21:45.365 START TEST nvmf_shutdown_tc2 00:21:45.365 ************************************ 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.365 07:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:45.365 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:45.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.366 07:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:45.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:45.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:45.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.366 
07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.366 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.367 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:21:45.367 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.367 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.367 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
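The nvmftestinit sequence above wires up the test topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace as the target side (10.0.0.2/24), its sibling (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1/24), and an iptables rule opens the NVMe/TCP port 4420; the ping exchanges that follow in the log verify the link. A dry-run sketch of that wiring (`run()` only echoes here, so this is safe to execute anywhere; drop the echo and run as root against real interfaces to actually apply it):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based topology built by nvmftestinit above.
# run() only echoes each command; remove the echo (and run as root) to apply.
NS=cvl_0_0_ns_spdk            # namespace holding the target side, per the log
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 # interface names taken from the log
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                          # target port into the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address, inside namespace
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
run ping -c 1 10.0.0.2                                         # connectivity check
```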
00:21:45.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:45.625 00:21:45.625 --- 10.0.0.2 ping statistics --- 00:21:45.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.625 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:45.625 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:21:45.625 00:21:45.625 --- 10.0.0.1 ping statistics --- 00:21:45.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.625 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:45.626 07:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2515062 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2515062 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2515062 ']' 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
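The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which polls (up to `max_retries=100`, as logged) for the daemon's RPC socket while checking the process has not died. A simplified stand-in, not SPDK's actual implementation; for the demo a plain file and a fake daemon replace the RPC socket and nvmf_tgt:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten pattern logged above: poll until the
# daemon's listen path shows up, bailing out early if the process dies first.
# A plain file stands in for /var/tmp/spdk.sock so the sketch runs anywhere.
wait_for_path() {
  local pid=$1 path=$2 tries=100    # max_retries=100, as in the log
  while (( tries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
    [ -e "$path" ] && return 0               # real code checks the RPC socket
    sleep 0.1
  done
  return 1
}

sock=$(mktemp -u)                       # stand-in path for /var/tmp/spdk.sock
( sleep 0.3; : > "$sock"; sleep 5 ) &   # fake daemon: "listens" after 0.3s
pid=$!
if wait_for_path "$pid" "$sock"; then echo ready; fi
kill "$pid" 2>/dev/null; wait "$pid" 2>/dev/null || true
rm -f "$sock"
```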
00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.626 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.626 [2024-07-25 07:27:18.043736] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:45.626 [2024-07-25 07:27:18.043859] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.626 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.626 [2024-07-25 07:27:18.117505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.883 [2024-07-25 07:27:18.237391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.883 [2024-07-25 07:27:18.237452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.883 [2024-07-25 07:27:18.237477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.883 [2024-07-25 07:27:18.237491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.883 [2024-07-25 07:27:18.237504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.883 [2024-07-25 07:27:18.237600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.883 [2024-07-25 07:27:18.237712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.883 [2024-07-25 07:27:18.237780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.884 [2024-07-25 07:27:18.237783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.814 [2024-07-25 07:27:19.023797] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.814 07:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.814 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.814 Malloc1 00:21:46.814 [2024-07-25 07:27:19.098630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.814 Malloc2 00:21:46.814 Malloc3 00:21:46.814 Malloc4 00:21:46.814 Malloc5 00:21:46.814 Malloc6 00:21:47.072 Malloc7 00:21:47.072 Malloc8 00:21:47.072 Malloc9 
00:21:47.072 Malloc10 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2515363 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2515363 /var/tmp/bdevperf.sock 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2515363 ']' 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:47.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.072 { 00:21:47.072 "params": { 00:21:47.072 "name": "Nvme$subsystem", 00:21:47.072 "trtype": "$TEST_TRANSPORT", 00:21:47.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.072 "adrfam": "ipv4", 00:21:47.072 "trsvcid": "$NVMF_PORT", 00:21:47.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.072 "hdgst": ${hdgst:-false}, 00:21:47.072 "ddgst": ${ddgst:-false} 00:21:47.072 }, 00:21:47.072 "method": "bdev_nvme_attach_controller" 00:21:47.072 } 00:21:47.072 EOF 00:21:47.072 )") 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.072 { 00:21:47.072 "params": { 00:21:47.072 "name": "Nvme$subsystem", 00:21:47.072 "trtype": "$TEST_TRANSPORT", 00:21:47.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.072 "adrfam": "ipv4", 00:21:47.072 "trsvcid": "$NVMF_PORT", 00:21:47.072 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.072 "hdgst": ${hdgst:-false}, 00:21:47.072 "ddgst": ${ddgst:-false} 00:21:47.072 }, 00:21:47.072 "method": "bdev_nvme_attach_controller" 00:21:47.072 } 00:21:47.072 EOF 00:21:47.072 )") 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.072 { 00:21:47.072 "params": { 00:21:47.072 "name": "Nvme$subsystem", 00:21:47.072 "trtype": "$TEST_TRANSPORT", 00:21:47.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.072 "adrfam": "ipv4", 00:21:47.072 "trsvcid": "$NVMF_PORT", 00:21:47.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.072 "hdgst": ${hdgst:-false}, 00:21:47.072 "ddgst": ${ddgst:-false} 00:21:47.072 }, 00:21:47.072 "method": "bdev_nvme_attach_controller" 00:21:47.072 } 00:21:47.072 EOF 00:21:47.072 )") 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.072 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.072 { 00:21:47.072 "params": { 00:21:47.072 "name": "Nvme$subsystem", 00:21:47.072 "trtype": "$TEST_TRANSPORT", 00:21:47.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.072 "adrfam": "ipv4", 00:21:47.072 "trsvcid": "$NVMF_PORT", 00:21:47.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.072 "hdgst": 
${hdgst:-false}, 00:21:47.072 "ddgst": ${ddgst:-false} 00:21:47.072 }, 00:21:47.072 "method": "bdev_nvme_attach_controller" 00:21:47.072 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.073 { 00:21:47.073 "params": { 00:21:47.073 "name": "Nvme$subsystem", 00:21:47.073 "trtype": "$TEST_TRANSPORT", 00:21:47.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.073 "adrfam": "ipv4", 00:21:47.073 "trsvcid": "$NVMF_PORT", 00:21:47.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.073 "hdgst": ${hdgst:-false}, 00:21:47.073 "ddgst": ${ddgst:-false} 00:21:47.073 }, 00:21:47.073 "method": "bdev_nvme_attach_controller" 00:21:47.073 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.073 { 00:21:47.073 "params": { 00:21:47.073 "name": "Nvme$subsystem", 00:21:47.073 "trtype": "$TEST_TRANSPORT", 00:21:47.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.073 "adrfam": "ipv4", 00:21:47.073 "trsvcid": "$NVMF_PORT", 00:21:47.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.073 "hdgst": ${hdgst:-false}, 00:21:47.073 "ddgst": ${ddgst:-false} 00:21:47.073 }, 00:21:47.073 "method": "bdev_nvme_attach_controller" 
00:21:47.073 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.073 { 00:21:47.073 "params": { 00:21:47.073 "name": "Nvme$subsystem", 00:21:47.073 "trtype": "$TEST_TRANSPORT", 00:21:47.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.073 "adrfam": "ipv4", 00:21:47.073 "trsvcid": "$NVMF_PORT", 00:21:47.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.073 "hdgst": ${hdgst:-false}, 00:21:47.073 "ddgst": ${ddgst:-false} 00:21:47.073 }, 00:21:47.073 "method": "bdev_nvme_attach_controller" 00:21:47.073 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.073 { 00:21:47.073 "params": { 00:21:47.073 "name": "Nvme$subsystem", 00:21:47.073 "trtype": "$TEST_TRANSPORT", 00:21:47.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.073 "adrfam": "ipv4", 00:21:47.073 "trsvcid": "$NVMF_PORT", 00:21:47.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.073 "hdgst": ${hdgst:-false}, 00:21:47.073 "ddgst": ${ddgst:-false} 00:21:47.073 }, 00:21:47.073 "method": "bdev_nvme_attach_controller" 00:21:47.073 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.073 { 00:21:47.073 "params": { 00:21:47.073 "name": "Nvme$subsystem", 00:21:47.073 "trtype": "$TEST_TRANSPORT", 00:21:47.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.073 "adrfam": "ipv4", 00:21:47.073 "trsvcid": "$NVMF_PORT", 00:21:47.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.073 "hdgst": ${hdgst:-false}, 00:21:47.073 "ddgst": ${ddgst:-false} 00:21:47.073 }, 00:21:47.073 "method": "bdev_nvme_attach_controller" 00:21:47.073 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.073 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.073 { 00:21:47.073 "params": { 00:21:47.073 "name": "Nvme$subsystem", 00:21:47.073 "trtype": "$TEST_TRANSPORT", 00:21:47.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.073 "adrfam": "ipv4", 00:21:47.073 "trsvcid": "$NVMF_PORT", 00:21:47.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.073 "hdgst": ${hdgst:-false}, 00:21:47.073 "ddgst": ${ddgst:-false} 00:21:47.073 }, 00:21:47.073 "method": "bdev_nvme_attach_controller" 00:21:47.073 } 00:21:47.073 EOF 00:21:47.073 )") 00:21:47.331 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:47.331 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:21:47.331 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:47.331 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme1", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme2", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme3", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme4", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 
00:21:47.331 "params": { 00:21:47.331 "name": "Nvme5", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme6", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme7", 00:21:47.331 "trtype": "tcp", 00:21:47.331 "traddr": "10.0.0.2", 00:21:47.331 "adrfam": "ipv4", 00:21:47.331 "trsvcid": "4420", 00:21:47.331 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:47.331 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:47.331 "hdgst": false, 00:21:47.331 "ddgst": false 00:21:47.331 }, 00:21:47.331 "method": "bdev_nvme_attach_controller" 00:21:47.331 },{ 00:21:47.331 "params": { 00:21:47.331 "name": "Nvme8", 00:21:47.331 "trtype": "tcp", 00:21:47.332 "traddr": "10.0.0.2", 00:21:47.332 "adrfam": "ipv4", 00:21:47.332 "trsvcid": "4420", 00:21:47.332 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:47.332 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:47.332 "hdgst": false, 00:21:47.332 "ddgst": false 00:21:47.332 }, 00:21:47.332 "method": "bdev_nvme_attach_controller" 00:21:47.332 },{ 00:21:47.332 "params": { 00:21:47.332 "name": "Nvme9", 00:21:47.332 "trtype": "tcp", 00:21:47.332 "traddr": "10.0.0.2", 00:21:47.332 "adrfam": "ipv4", 00:21:47.332 "trsvcid": "4420", 00:21:47.332 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:47.332 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:47.332 "hdgst": false, 00:21:47.332 "ddgst": false 00:21:47.332 }, 00:21:47.332 "method": "bdev_nvme_attach_controller" 00:21:47.332 },{ 00:21:47.332 "params": { 00:21:47.332 "name": "Nvme10", 00:21:47.332 "trtype": "tcp", 00:21:47.332 "traddr": "10.0.0.2", 00:21:47.332 "adrfam": "ipv4", 00:21:47.332 "trsvcid": "4420", 00:21:47.332 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:47.332 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:47.332 "hdgst": false, 00:21:47.332 "ddgst": false 00:21:47.332 }, 00:21:47.332 "method": "bdev_nvme_attach_controller" 00:21:47.332 }' 00:21:47.332 [2024-07-25 07:27:19.614802] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:47.332 [2024-07-25 07:27:19.614892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515363 ] 00:21:47.332 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.332 [2024-07-25 07:27:19.678715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.332 [2024-07-25 07:27:19.788994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.703 Running I/O for 10 seconds... 
00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:49.304 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:49.561 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2515363 00:21:49.562 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2515363 ']' 00:21:49.562 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2515363 00:21:49.562 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:49.562 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.562 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515363 00:21:49.562 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.562 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.562 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515363' 00:21:49.562 killing process with pid 2515363 00:21:49.562 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2515363 00:21:49.562 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2515363 00:21:49.819 
Received shutdown signal, test time was about 0.875490 seconds
00:21:49.819
00:21:49.819 Latency(us)
00:21:49.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.819 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme1n1 : 0.84 232.18 14.51 0.00 0.00 271091.11 2852.03 250104.79
00:21:49.819 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme2n1 : 0.85 232.57 14.54 0.00 0.00 263168.81 3762.25 254765.13
00:21:49.819 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme3n1 : 0.83 239.31 14.96 0.00 0.00 249496.66 2002.49 254765.13
00:21:49.819 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme4n1 : 0.87 294.35 18.40 0.00 0.00 200912.78 19126.80 237677.23
00:21:49.819 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme5n1 : 0.86 222.64 13.91 0.00 0.00 259461.37 21942.42 250104.79
00:21:49.819 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme6n1 : 0.87 219.51 13.72 0.00 0.00 257524.12 20777.34 299815.06
00:21:49.819 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme7n1 : 0.84 233.64 14.60 0.00 0.00 232980.98 5485.61 236123.78
00:21:49.819 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme8n1 : 0.86 224.26 14.02 0.00 0.00 238703.25 20486.07 256318.58
00:21:49.819 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme9n1 : 0.86 224.03 14.00 0.00 0.00 233410.81 22136.60 260978.92
00:21:49.819 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.819 Verification LBA range: start 0x0 length 0x400
00:21:49.819 Nvme10n1 : 0.87 221.77 13.86 0.00 0.00 230804.04 20000.62 264085.81
00:21:49.819 ===================================================================================================================
00:21:49.819 Total : 2344.26 146.52 0.00 0.00 242485.21 2002.49 299815.06
00:21:50.076 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2515062
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:21:51.013 07:27:23
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.013 rmmod nvme_tcp 00:21:51.013 rmmod nvme_fabrics 00:21:51.013 rmmod nvme_keyring 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2515062 ']' 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2515062 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2515062 ']' 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2515062 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515062 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515062' 00:21:51.013 killing process with pid 2515062 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2515062 00:21:51.013 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2515062 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.582 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.109 00:21:54.109 real 0m8.239s 00:21:54.109 user 0m25.332s 00:21:54.109 sys 0m1.476s 00:21:54.109 07:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.109 ************************************ 00:21:54.109 END TEST nvmf_shutdown_tc2 00:21:54.109 ************************************ 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.109 ************************************ 00:21:54.109 START TEST nvmf_shutdown_tc3 00:21:54.109 ************************************ 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 
-- # net_devs=() 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:54.109 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:54.110 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:54.110 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.110 07:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:54.110 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.110 07:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:54.110 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
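The device discovery recorded above (nvmf/common.sh's gather_supported_nvmf_pci_devs) buckets PCI vendor:device IDs into NIC families (e810, x722, mlx) before selecting TCP test interfaces. A minimal sketch of that classification, assuming the ID lists visible in this log; the helper name is illustrative, not part of SPDK:

```python
# Bucket PCI vendor:device pairs into the NIC families the test script
# checks for. The ID sets below are the ones appended in nvmf/common.sh
# as shown in this log (e810+=..., x722+=..., mlx+=...).
INTEL, MELLANOX = 0x8086, 0x15B3

FAMILIES = {
    "e810": {(INTEL, 0x1592), (INTEL, 0x159B)},
    "x722": {(INTEL, 0x37D2)},
    "mlx": {(MELLANOX, 0xA2DC), (MELLANOX, 0x1021), (MELLANOX, 0xA2D6),
            (MELLANOX, 0x101D), (MELLANOX, 0x1017), (MELLANOX, 0x1019),
            (MELLANOX, 0x1015), (MELLANOX, 0x1013)},
}

def classify(vendor: int, device: int):
    """Return the NIC family name for a vendor:device pair, or None."""
    for family, ids in FAMILIES.items():
        if (vendor, device) in ids:
            return family
    return None
```

The log shows two 0x8086:0x159b ports (0000:0a:00.0 and 0000:0a:00.1), which fall into the e810 bucket, matching the `[[ e810 == e810 ]]` branch taken above.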
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:54.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:21:54.110 00:21:54.110 --- 10.0.0.2 ping statistics --- 00:21:54.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.110 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:21:54.110 00:21:54.110 --- 10.0.0.1 ping statistics --- 00:21:54.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.110 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.110 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.111 07:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2516274 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2516274 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2516274 ']' 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.111 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.111 [2024-07-25 07:27:26.328946] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:54.111 [2024-07-25 07:27:26.329032] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.111 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.111 [2024-07-25 07:27:26.401360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.111 [2024-07-25 07:27:26.522822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.111 [2024-07-25 07:27:26.522880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.111 [2024-07-25 07:27:26.522893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.111 [2024-07-25 07:27:26.522904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.111 [2024-07-25 07:27:26.522913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:54.111 [2024-07-25 07:27:26.522971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.111 [2024-07-25 07:27:26.523027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.111 [2024-07-25 07:27:26.523091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:54.111 [2024-07-25 07:27:26.523094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.369 [2024-07-25 07:27:26.681775] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.369 07:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.369 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.369 Malloc1 00:21:54.369 [2024-07-25 07:27:26.775026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.369 Malloc2 00:21:54.369 Malloc3 00:21:54.369 Malloc4 00:21:54.626 Malloc5 00:21:54.626 Malloc6 00:21:54.626 Malloc7 00:21:54.626 Malloc8 00:21:54.626 Malloc9 
00:21:54.884 Malloc10 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2516428 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2516428 /var/tmp/bdevperf.sock 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2516428 ']' 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.884 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:54.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 "adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": ${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 
"adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": ${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 "adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": ${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 "adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": ${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 "adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": ${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 "adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": 
${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.885 { 00:21:54.885 "params": { 00:21:54.885 "name": "Nvme$subsystem", 00:21:54.885 "trtype": "$TEST_TRANSPORT", 00:21:54.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.885 "adrfam": "ipv4", 00:21:54.885 "trsvcid": "$NVMF_PORT", 00:21:54.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.885 "hdgst": ${hdgst:-false}, 00:21:54.885 "ddgst": ${ddgst:-false} 00:21:54.885 }, 00:21:54.885 "method": "bdev_nvme_attach_controller" 00:21:54.885 } 00:21:54.885 EOF 00:21:54.885 )") 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:54.885 
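The xtrace above shows nvmf/common.sh looping over subsystems and appending one `bdev_nvme_attach_controller` JSON fragment per iteration via a here-document. A minimal standalone sketch of that pattern (variable values here are hypothetical placeholders, not the values the test actually used):

```shell
#!/usr/bin/env bash
# Sketch of the config-accumulation loop seen in the trace above:
# each iteration appends one JSON fragment to the config array.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT values are made up here.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 1 2; do
  # ${hdgst:-false}/${ddgst:-false} default to "false" when unset,
  # matching the expanded output seen later in the log.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
echo "accumulated ${#config[@]} fragments"
```

In the actual run the loop executes once per subsystem (Nvme1 through Nvme10), which is why the same unexpanded template repeats in the trace.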
07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:54.885 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:54.886 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme1", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme2", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme3", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme4", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 
00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme5", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme6", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme7", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme8", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme9", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 },{ 00:21:54.886 "params": { 00:21:54.886 "name": "Nvme10", 00:21:54.886 "trtype": "tcp", 00:21:54.886 "traddr": "10.0.0.2", 00:21:54.886 "adrfam": "ipv4", 00:21:54.886 "trsvcid": "4420", 00:21:54.886 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:54.886 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:54.886 "hdgst": false, 00:21:54.886 "ddgst": false 00:21:54.886 }, 00:21:54.886 "method": "bdev_nvme_attach_controller" 00:21:54.886 }' 00:21:54.886 [2024-07-25 07:27:27.274962] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:21:54.886 [2024-07-25 07:27:27.275051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516428 ] 00:21:54.886 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.886 [2024-07-25 07:27:27.339313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.143 [2024-07-25 07:27:27.448831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.089 Running I/O for 10 seconds... 
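The fully expanded Nvme1..Nvme10 listing above is produced by the `IFS=,` / `printf '%s\n'` steps at common.sh@557-558, which comma-join the accumulated fragments by expanding the array as `"${config[*]}"`. A minimal sketch of that join step, using two stand-in fragments rather than the real ten-controller config:

```shell
#!/usr/bin/env bash
# Sketch of the join step traced above: under IFS=, the "${config[*]}"
# expansion concatenates array elements with commas, producing the
# "},{" seams visible in the expanded listing. Fragments are stand-ins.
config=(
  '{ "params": { "name": "Nvme1" }, "method": "bdev_nvme_attach_controller" }'
  '{ "params": { "name": "Nvme2" }, "method": "bdev_nvme_attach_controller" }'
)
IFS=,
joined="${config[*]}"   # element1 + "," + element2
printf '%s\n' "$joined"
```

Note the joined string is a fragment list, not a single JSON document; the caller embeds it in the larger bdevperf configuration before use.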
00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.089 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.090 07:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:57.090 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:57.347 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:57.347 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.347 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.347 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.348 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.348 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.348 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:57.348 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:57.348 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:57.348 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:57.621 07:27:29 
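The trace above shows waitforio (target/shutdown.sh@57-69) polling Nvme1n1's read-op counter: ten attempts, a 0.25 s sleep between them, success once the count reaches 100 (here 3, then 67, then 131). A sketch of that retry loop, with a hypothetical `get_read_ops` stub standing in for the real RPC query (`rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`):

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop traced above: poll a bdev's
# read-op count until it reaches 100 or ten attempts expire.
reads=0
get_read_ops() {
  # Hypothetical stub simulating I/O progress; the real code queries
  # the bdevperf RPC socket with bdev_get_iostat and extracts the
  # counter with jq.
  reads=$((reads + 64))
  read_io_count=$reads
}
ret=1
(( i = 10 ))
while (( i != 0 )); do
  get_read_ops
  if [ "$read_io_count" -ge 100 ]; then
    ret=0            # enough reads observed; the shutdown test may proceed
    break
  fi
  sleep 0.25         # same back-off interval as the trace
  (( i-- ))
done
echo "ret=$ret read_io_count=$read_io_count"
```

Once the threshold is crossed the script breaks out with ret=0 and moves on to killing the target process, which is what the killprocess trace below records.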
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2516274 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2516274 ']' 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2516274 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.621 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2516274 00:21:57.621 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:57.621 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:57.621 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2516274' 00:21:57.621 killing process with pid 2516274 00:21:57.621 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2516274 00:21:57.621 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2516274 00:21:57.621 [2024-07-25 07:27:30.009312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1af0 is same with the state(5) to be set 00:21:57.621 [2024-07-25 07:27:30.009382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1af0 is same with the state(5) to be set 00:21:57.621 [2024-07-25 07:27:30.009400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20c1af0 is same with the state(5) to be set 00:21:57.621 [2024-07-25 07:27:30.011911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4610 is same with the state(5) to be set 00:21:57.622 [2024-07-25 07:27:30.013283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1fb0 is same with the state(5) to be set 00:21:57.622 [2024-07-25 07:27:30.015869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.015971]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.015984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.015996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016119] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016316] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016546] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016699] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.016816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2470 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018226] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018401] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.623 [2024-07-25 07:27:30.018453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018571] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018726] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018886] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.018988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019040] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2950 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019963] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.019995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020127] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set 00:21:57.624 [2024-07-25 07:27:30.020303] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2e10 is same with the state(5) to be set
[message repeated for tqpair=0x20c2e10 with advancing timestamps, 2024-07-25 07:27:30.020315 through 07:27:30.020675]
[2024-07-25 07:27:30.021671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c32f0 is same with the state(5) to be set
[message repeated for tqpair=0x20c32f0 with advancing timestamps, through 07:27:30.022542]
[2024-07-25 07:27:30.023790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set
[message repeated for tqpair=0x20c37b0 with advancing timestamps, through 07:27:30.024254]
00:21:57.626 [2024-07-25 07:27:30.026920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:57.626 [2024-07-25 07:27:30.026967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[command/completion pairs repeated, 07:27:30.027000 through 07:27:30.028844: WRITE sqid:1 cid:18-63 (lba:26880-32640, len:128) and READ sqid:1 cid:0-11 (lba:24576-25984, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:57.628 [2024-07-25 07:27:30.028860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.628 [2024-07-25 07:27:30.028874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.028892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.628 [2024-07-25 07:27:30.028914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.028931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.628 [2024-07-25 07:27:30.028946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.028962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.628 [2024-07-25 07:27:30.028977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.028992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.628 [2024-07-25 07:27:30.029007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.029059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:57.628 [2024-07-25 07:27:30.029661] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1574bc0 was disconnected and freed. reset controller. 
00:21:57.628 [2024-07-25 07:27:30.029775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.029798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.029820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.029834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.029848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.029861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.029875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.029889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.029906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fce00 is same with the state(5) to be set 00:21:57.628 [2024-07-25 07:27:30.029960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.029981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.029997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4610 is same with the state(5) to be set 00:21:57.628 [2024-07-25 07:27:30.030134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 
[2024-07-25 07:27:30.030224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2060 is same with the state(5) to be set 00:21:57.628 [2024-07-25 07:27:30.030337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503e20 is same with the state(5) to be set 00:21:57.628 [2024-07-25 07:27:30.030511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf2e0 is same with the state(5) to be set 00:21:57.628 [2024-07-25 07:27:30.030691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 
07:27:30.030712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.628 [2024-07-25 07:27:30.030794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.628 [2024-07-25 07:27:30.030806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159e390 is same with the state(5) to be set 00:21:57.628 [2024-07-25 07:27:30.030849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.030871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.030886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.030899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.030913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.030930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.030945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.030958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.030971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d28d0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.031016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.031037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.031052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.031070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.031085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.031098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.031112] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.629 [2024-07-25 07:27:30.031126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.031138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401f80 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.035020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:57.629 [2024-07-25 07:27:30.035078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b2060 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.038073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.629 [2024-07-25 07:27:30.038117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b2060 with addr=10.0.0.2, port=4420 00:21:57.629 [2024-07-25 07:27:30.038137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2060 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with 
the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.038981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.039001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.039024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.039046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c37b0 is same with the state(5) to be set 00:21:57.629 [2024-07-25 07:27:30.039078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b2060 (9): Bad file descriptor 
00:21:57.629 [2024-07-25 07:27:30.039171] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:57.629 [2024-07-25 07:27:30.039259] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:57.629 [2024-07-25 07:27:30.039338] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:57.629 [2024-07-25 07:27:30.039407] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:57.629 [2024-07-25 07:27:30.039481] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:57.629 [2024-07-25 07:27:30.039564] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:57.629 [2024-07-25 07:27:30.040039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:57.629 [2024-07-25 07:27:30.040066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:57.629 [2024-07-25 07:27:30.040085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:21:57.629 [2024-07-25 07:27:30.040120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fce00 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4610 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1503e20 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf2e0 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159e390 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d28d0 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1401f80 (9): Bad file descriptor 00:21:57.629 [2024-07-25 07:27:30.040815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:57.629 [2024-07-25 07:27:30.041625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.629 [2024-07-25 07:27:30.041894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.629 [2024-07-25 07:27:30.041912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.041931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.041945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.041961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.041981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.041998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:57.630 [2024-07-25 07:27:30.042235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042455] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.042589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.630 [2024-07-25 07:27:30.042606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.630 [2024-07-25 07:27:30.044796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c3c90 is same with the state(5) to be set 00:21:57.630 [2024-07-25 07:27:30.044830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20c3c90 is same with the state(5) to be set 00:21:57.630 [2024-07-25 07:27:30.047207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4150 is same with the state(5) to be set 00:21:57.631 [2024-07-25 07:27:30.047843]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4150 is same with the state(5) to be set 00:21:57.631 [2024-07-25 07:27:30.047855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4150 is same with the state(5) to be set 00:21:57.631 [2024-07-25 07:27:30.047868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4150 is same with the state(5) to be set 00:21:57.631 [2024-07-25 07:27:30.047881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4150 is same with the state(5) to be set 00:21:57.631 [2024-07-25 07:27:30.047893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4150 is same with the state(5) to be set 00:21:57.631 [2024-07-25 07:27:30.055471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.055978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.055993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 
07:27:30.056402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.631 [2024-07-25 07:27:30.056420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.631 [2024-07-25 07:27:30.056434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.056719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.056737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfff40 is same with the state(5) to be set 00:21:57.632 [2024-07-25 07:27:30.056851] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cfff40 was disconnected and freed. 
reset controller. 00:21:57.632 [2024-07-25 07:27:30.057731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.057757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.057774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.057788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.057802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.057815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.057829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.057843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.057857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158c620 is same with the state(5) to be set 00:21:57.632 [2024-07-25 07:27:30.057909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.057940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.057962] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.057976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.057991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.058010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.058025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.632 [2024-07-25 07:27:30.058039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.058052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1584bd0 is same with the state(5) to be set 00:21:57.632 [2024-07-25 07:27:30.058082] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:57.632 [2024-07-25 07:27:30.059364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059556] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:57.632 [2024-07-25 07:27:30.059917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.059978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.632 [2024-07-25 07:27:30.059992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.632 [2024-07-25 07:27:30.060008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060088] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 
07:27:30.060627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.060972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.060988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 
[2024-07-25 07:27:30.061154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.633 [2024-07-25 07:27:30.061232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.633 [2024-07-25 07:27:30.061253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061476] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea7a10 was disconnected and freed. reset controller. 00:21:57.634 [2024-07-25 07:27:30.061642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:57.634 [2024-07-25 07:27:30.061750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.061966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.061983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 
07:27:30.062583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.634 [2024-07-25 07:27:30.062661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.634 [2024-07-25 07:27:30.062675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.062982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.062998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 
[2024-07-25 07:27:30.063109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.063291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.063305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.071919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.071977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.071994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.072450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.072467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b690 is same with the state(5) to be set 00:21:57.635 [2024-07-25 07:27:30.073839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.073863] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.073889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.073905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.073922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.073937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.635 [2024-07-25 07:27:30.073953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.635 [2024-07-25 07:27:30.073968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.073984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.073999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:57.636 [2024-07-25 07:27:30.074238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 
07:27:30.074950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.074981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.074998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.636 [2024-07-25 07:27:30.075198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.636 [2024-07-25 07:27:30.075214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 
[2024-07-25 07:27:30.075489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.637 [2024-07-25 07:27:30.075859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.637 [2024-07-25 07:27:30.075874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1480e60 is same with the state(5) to be set
00:21:57.637 [2024-07-25 07:27:30.077134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:57.637 [2024-07-25 07:27:30.077158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1 through cid:63, lba:16512 through lba:24448 in steps of 128 ...]
00:21:57.639 [2024-07-25 07:27:30.079171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482310 is same with the state(5) to be set
00:21:57.639 [2024-07-25 07:27:30.080442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:57.639 [2024-07-25 07:27:30.080465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1 through cid:51, lba:24704 through lba:31104 in steps of 128 ...]
00:21:57.640 [2024-07-25 07:27:30.082103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[2024-07-25 07:27:30.082121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.082470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.082485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cbdf0 is same with the state(5) to be set 00:21:57.640 [2024-07-25 07:27:30.083724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.640 [2024-07-25 07:27:30.083752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.640 [2024-07-25 07:27:30.083774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.083807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.083838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.083868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:57.641 [2024-07-25 07:27:30.083898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.083928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.083973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.083990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:57.641 [2024-07-25 07:27:30.084446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084616] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.084974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.084989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.641 [2024-07-25 07:27:30.085005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.641 [2024-07-25 07:27:30.085020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 
07:27:30.085143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 
[2024-07-25 07:27:30.085687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.085753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.085768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cd2a0 is same with the state(5) to be set 00:21:57.642 [2024-07-25 07:27:30.087009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:57.642 [2024-07-25 07:27:30.087472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.642 [2024-07-25 07:27:30.087518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.642 [2024-07-25 07:27:30.087535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.087973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.087987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 
07:27:30.088179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.643 [2024-07-25 07:27:30.088594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.643 [2024-07-25 07:27:30.088610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 
[2024-07-25 07:27:30.088717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.088968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.088985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.089000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.089016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.089030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.089045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ce750 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.091527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:57.644 [2024-07-25 07:27:30.091565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.644 [2024-07-25 07:27:30.091877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.091908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1503e20 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.091926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503e20 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.091982] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:57.644 [2024-07-25 07:27:30.092005] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:57.644 [2024-07-25 07:27:30.092032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158c620 (9): Bad file descriptor 00:21:57.644 [2024-07-25 07:27:30.092058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584bd0 (9): Bad file descriptor 00:21:57.644 [2024-07-25 07:27:30.092093] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:57.644 [2024-07-25 07:27:30.092115] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:57.644 [2024-07-25 07:27:30.092136] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:57.644 [2024-07-25 07:27:30.092156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1503e20 (9): Bad file descriptor 00:21:57.644 [2024-07-25 07:27:30.092608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:57.644 [2024-07-25 07:27:30.092637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:57.644 [2024-07-25 07:27:30.092655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:57.644 [2024-07-25 07:27:30.092671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:57.644 [2024-07-25 07:27:30.092844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.092871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b2060 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.092888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2060 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.093027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.093053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d28d0 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.093075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d28d0 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.095011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:57.644 [2024-07-25 07:27:30.095041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:57.644 [2024-07-25 07:27:30.095204] posix.c:1023:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.095231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159e390 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.095255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159e390 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.095409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.095435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fce00 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.095450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fce00 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.095668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.095692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1401f80 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.095709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401f80 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.095828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.644 [2024-07-25 07:27:30.095853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf2e0 with addr=10.0.0.2, port=4420 00:21:57.644 [2024-07-25 07:27:30.095869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf2e0 is same with the state(5) to be set 00:21:57.644 [2024-07-25 07:27:30.095888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b2060 (9): Bad file descriptor 00:21:57.644 [2024-07-25 07:27:30.095907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x13d28d0 (9): Bad file descriptor 00:21:57.644 [2024-07-25 07:27:30.095925] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:57.644 [2024-07-25 07:27:30.095938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:57.644 [2024-07-25 07:27:30.095955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:57.644 [2024-07-25 07:27:30.096094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.644 [2024-07-25 07:27:30.096347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.644 [2024-07-25 07:27:30.096363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 
07:27:30.096625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096800] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.096984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.096999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 
[2024-07-25 07:27:30.097162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.645 [2024-07-25 07:27:30.097596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.645 [2024-07-25 07:27:30.097610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097876] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.097982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.097999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.098013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.098030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.098044] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.098061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.098075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.098092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.098106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.098122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.646 [2024-07-25 07:27:30.098136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.646 [2024-07-25 07:27:30.098151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15736c0 is same with the state(5) to be set 00:21:57.646 [2024-07-25 07:27:30.098736] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15736c0 was disconnected and freed. reset controller. 00:21:57.646 [2024-07-25 07:27:30.098769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:57.646 [2024-07-25 07:27:30.098912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.646 [2024-07-25 07:27:30.098939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4610 with addr=10.0.0.2, port=4420
00:21:57.646 [2024-07-25 07:27:30.098955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4610 is same with the state(5) to be set
00:21:57.646 [2024-07-25 07:27:30.099129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.646 [2024-07-25 07:27:30.099159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158c620 with addr=10.0.0.2, port=4420
00:21:57.646 [2024-07-25 07:27:30.099176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158c620 is same with the state(5) to be set
00:21:57.646 [2024-07-25 07:27:30.099196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159e390 (9): Bad file descriptor
00:21:57.646 [2024-07-25 07:27:30.099215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fce00 (9): Bad file descriptor
00:21:57.646 [2024-07-25 07:27:30.099233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1401f80 (9): Bad file descriptor
00:21:57.646 [2024-07-25 07:27:30.099260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf2e0 (9): Bad file descriptor
00:21:57.646 [2024-07-25 07:27:30.099278] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:57.646 [2024-07-25 07:27:30.099291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:57.646 [2024-07-25 07:27:30.099305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:57.646 [2024-07-25 07:27:30.099324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:57.646 [2024-07-25 07:27:30.099338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:57.646 [2024-07-25 07:27:30.099351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:57.646 [2024-07-25 07:27:30.100593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.646 [2024-07-25 07:27:30.100617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.646 [2024-07-25 07:27:30.100631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:57.646 task offset: 26752 on job bdev=Nvme10n1 fails
00:21:57.646
00:21:57.646 Latency(us)
00:21:57.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:57.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme1n1 ended in about 0.95 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme1n1 : 0.95 135.03 8.44 67.52 0.00 312720.75 22039.51 257872.02
00:21:57.646 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme2n1 ended in about 0.95 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme2n1 : 0.95 134.56 8.41 67.28 0.00 307872.55 22524.97 274959.93
00:21:57.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme3n1 ended in about 0.95 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme3n1 : 0.95 134.09 8.38 67.05 0.00 303053.94 26408.58 298261.62
00:21:57.646 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme4n1 ended in about 0.96 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme4n1 : 0.96 200.45 12.53 66.82 0.00 223618.09 22524.97 256318.58
00:21:57.646 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme5n1 ended in about 0.96 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme5n1 : 0.96 133.18 8.32 66.59 0.00 293369.17 42331.40 278066.82
00:21:57.646 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme6n1 ended in about 0.96 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme6n1 : 0.96 199.09 12.44 66.36 0.00 216336.88 15825.73 254765.13
00:21:57.646 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme7n1 ended in about 0.93 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme7n1 : 0.93 137.10 8.57 68.55 0.00 272320.28 22913.33 281173.71
00:21:57.646 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme8n1 ended in about 0.97 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme8n1 : 0.97 198.84 12.43 66.28 0.00 207726.55 19223.89 229910.00
00:21:57.646 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme9n1 ended in about 0.97 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme9n1 : 0.97 131.31 8.21 65.66 0.00 274293.44 23204.60 310689.19
00:21:57.646 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.646 Job: Nvme10n1 ended in about 0.91 seconds with error
00:21:57.646 Verification LBA range: start 0x0 length 0x400
00:21:57.646 Nvme10n1 : 0.91 211.14 13.20 70.38 0.00 184949.48 7281.78 256318.58
00:21:57.646 ===================================================================================================================
00:21:57.646 Total : 1614.79 100.92 672.48 0.00 253571.01 7281.78 310689.19
00:21:57.646 [2024-07-25 07:27:30.130030] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:57.647 [2024-07-25 07:27:30.130195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4610 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.130230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158c620 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.130257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.130273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.130292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:57.647 [2024-07-25 07:27:30.130321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.130337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.130351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:57.647 [2024-07-25 07:27:30.130368] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.130382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.130395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:57.647 [2024-07-25 07:27:30.130413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.130428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.130442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:57.647 [2024-07-25 07:27:30.130543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.130566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.130579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.130591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.130848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.130886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1584bd0 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.130916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1584bd0 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.130933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.130946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.130960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:57.647 [2024-07-25 07:27:30.130978] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.130993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.131006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:57.647 [2024-07-25 07:27:30.131099] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:57.647 [2024-07-25 07:27:30.131124] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:57.647 [2024-07-25 07:27:30.131458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.131491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.131529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584bd0 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.131874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:57.647 [2024-07-25 07:27:30.131907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:57.647 [2024-07-25 07:27:30.131925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:57.647 [2024-07-25 07:27:30.131942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:57.647 [2024-07-25 07:27:30.131958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:57.647 [2024-07-25 07:27:30.132007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.132025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.132040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:57.647 [2024-07-25 07:27:30.132085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:57.647 [2024-07-25 07:27:30.132107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:57.647 [2024-07-25 07:27:30.132134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:57.647 [2024-07-25 07:27:30.132327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.132356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1503e20 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.132373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503e20 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.132502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.132527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d28d0 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.132544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d28d0 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.132700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.132731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b2060 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.132748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2060 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.132869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.132894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf2e0 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.132910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf2e0 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.133037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.133062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1401f80 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.133077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401f80 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.133221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.133253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fce00 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.133271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fce00 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.133438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.647 [2024-07-25 07:27:30.133463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159e390 with addr=10.0.0.2, port=4420
00:21:57.647 [2024-07-25 07:27:30.133479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159e390 is same with the state(5) to be set
00:21:57.647 [2024-07-25 07:27:30.133498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1503e20 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d28d0 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b2060 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf2e0 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1401f80 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fce00 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159e390 (9): Bad file descriptor
00:21:57.647 [2024-07-25 07:27:30.133650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.133664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.133677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:57.647 [2024-07-25 07:27:30.133693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.133708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.133721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:57.647 [2024-07-25 07:27:30.133736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:57.647 [2024-07-25 07:27:30.133750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:57.647 [2024-07-25 07:27:30.133768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:57.647 [2024-07-25 07:27:30.133785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:57.647 [2024-07-25 07:27:30.133799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:57.647 [2024-07-25 07:27:30.133812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:57.647 [2024-07-25 07:27:30.133827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:57.647 [2024-07-25 07:27:30.133841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:57.647 [2024-07-25 07:27:30.133855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:57.647 [2024-07-25 07:27:30.133895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.647 [2024-07-25 07:27:30.133914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.647 [2024-07-25 07:27:30.133926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.647 [2024-07-25 07:27:30.133938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.647 [2024-07-25 07:27:30.133949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.647 [2024-07-25 07:27:30.133962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:57.648 [2024-07-25 07:27:30.133975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:57.648 [2024-07-25 07:27:30.133988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:21:57.648 [2024-07-25 07:27:30.134004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:57.648 [2024-07-25 07:27:30.134018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:57.648 [2024-07-25 07:27:30.134031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:57.648 [2024-07-25 07:27:30.134068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.648 [2024-07-25 07:27:30.134086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:58.212 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:58.212 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2516428 00:21:59.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2516428) - No such process 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.147 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.147 rmmod nvme_tcp 00:21:59.147 rmmod nvme_fabrics 00:21:59.147 rmmod nvme_keyring 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.406 07:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.406 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.306 00:22:01.306 real 0m7.635s 00:22:01.306 user 0m18.504s 00:22:01.306 sys 0m1.526s 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.306 ************************************ 00:22:01.306 END TEST nvmf_shutdown_tc3 00:22:01.306 ************************************ 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:01.306 00:22:01.306 real 0m28.184s 00:22:01.306 user 1m19.191s 00:22:01.306 sys 0m6.406s 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:01.306 ************************************ 00:22:01.306 END TEST nvmf_shutdown 00:22:01.306 ************************************ 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:01.306 00:22:01.306 real 10m32.156s 00:22:01.306 user 25m14.825s 00:22:01.306 sys 2m32.250s 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.306 07:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:01.306 ************************************ 00:22:01.306 END TEST nvmf_target_extra 00:22:01.306 ************************************ 00:22:01.306 07:27:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:01.306 07:27:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:01.306 07:27:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.306 07:27:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.306 ************************************ 00:22:01.306 START TEST nvmf_host 00:22:01.306 ************************************ 00:22:01.306 07:27:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:01.564 * Looking for test storage... 
00:22:01.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.564 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 
nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.565 ************************************ 00:22:01.565 START TEST nvmf_multicontroller 00:22:01.565 ************************************ 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:01.565 * Looking for test storage... 00:22:01.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- paths/export.sh@5 -- # export PATH 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.565 07:27:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:01.565 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.566 07:27:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.466 07:27:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.466 07:27:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:03.466 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:03.466 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:03.466 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:22:03.467 Found net devices under 0000:0a:00.0: cvl_0_0
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:22:03.467 Found net devices under 0000:0a:00.1: cvl_0_1
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:03.467 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:03.725 07:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:03.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:03.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms
00:22:03.725 
00:22:03.725 --- 10.0.0.2 ping statistics ---
00:22:03.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.725 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:03.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:03.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms
00:22:03.725 
00:22:03.725 --- 10.0.0.1 ping statistics ---
00:22:03.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.725 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2518897
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2518897
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2518897 ']'
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:03.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:03.725 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:03.725 [2024-07-25 07:27:36.178570] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:22:03.725 [2024-07-25 07:27:36.178656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:03.725 EAL: No free 2048 kB hugepages reported on node 1
00:22:03.725 [2024-07-25 07:27:36.249273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:22:03.983 [2024-07-25 07:27:36.368595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:03.983 [2024-07-25 07:27:36.368656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:03.983 [2024-07-25 07:27:36.368683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:03.983 [2024-07-25 07:27:36.368696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:03.983 [2024-07-25 07:27:36.368708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:03.983 [2024-07-25 07:27:36.368793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:22:03.983 [2024-07-25 07:27:36.368887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:22:03.983 [2024-07-25 07:27:36.368891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:03.983 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:03.983 [2024-07-25 07:27:36.509248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 Malloc0
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 [2024-07-25 07:27:36.570281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 [2024-07-25 07:27:36.578113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 Malloc1
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2519034
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2519034 /var/tmp/bdevperf.sock
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2519034 ']'
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:04.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:04.242 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.500 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:04.500 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0
00:22:04.500 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:22:04.500 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.500 07:27:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.758 NVMe0n1
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.758 1
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.758 request:
00:22:04.758 {
00:22:04.758 "name": "NVMe0",
00:22:04.758 "trtype": "tcp",
00:22:04.758 "traddr": "10.0.0.2",
00:22:04.758 "adrfam": "ipv4",
00:22:04.758 "trsvcid": "4420",
00:22:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:04.758 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:22:04.758 "hostaddr": "10.0.0.2",
00:22:04.758 "hostsvcid": "60000",
00:22:04.758 "prchk_reftag": false,
00:22:04.758 "prchk_guard": false,
00:22:04.758 "hdgst": false,
00:22:04.758 "ddgst": false,
00:22:04.758 "method": "bdev_nvme_attach_controller",
00:22:04.758 "req_id": 1
00:22:04.758 }
00:22:04.758 Got JSON-RPC error response
00:22:04.758 response:
00:22:04.758 {
00:22:04.758 "code": -114,
00:22:04.758 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:04.758 }
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:04.758 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.759 request:
00:22:04.759 {
00:22:04.759 "name": "NVMe0",
00:22:04.759 "trtype": "tcp",
00:22:04.759 "traddr": "10.0.0.2",
00:22:04.759 "adrfam": "ipv4",
00:22:04.759 "trsvcid": "4420",
00:22:04.759 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:04.759 "hostaddr": "10.0.0.2",
00:22:04.759 "hostsvcid": "60000",
00:22:04.759 "prchk_reftag": false,
00:22:04.759 "prchk_guard": false,
00:22:04.759 "hdgst": false,
00:22:04.759 "ddgst": false,
00:22:04.759 "method": "bdev_nvme_attach_controller",
00:22:04.759 "req_id": 1
00:22:04.759 }
00:22:04.759 Got JSON-RPC error response
00:22:04.759 response:
00:22:04.759 {
00:22:04.759 "code": -114,
00:22:04.759 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:04.759 }
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.759 request:
00:22:04.759 {
00:22:04.759 "name": "NVMe0",
00:22:04.759 "trtype": "tcp",
00:22:04.759 "traddr": "10.0.0.2",
00:22:04.759 "adrfam": "ipv4",
00:22:04.759 "trsvcid": "4420",
00:22:04.759 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:04.759 "hostaddr": "10.0.0.2",
00:22:04.759 "hostsvcid": "60000",
00:22:04.759 "prchk_reftag": false,
00:22:04.759 "prchk_guard": false,
00:22:04.759 "hdgst": false,
00:22:04.759 "ddgst": false,
00:22:04.759 "multipath": "disable",
00:22:04.759 "method": "bdev_nvme_attach_controller",
00:22:04.759 "req_id": 1
00:22:04.759 }
00:22:04.759 Got JSON-RPC error response
00:22:04.759 response:
00:22:04.759 {
00:22:04.759 "code": -114,
00:22:04.759 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:22:04.759 }
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.759 request:
00:22:04.759 {
00:22:04.759 "name": "NVMe0",
00:22:04.759 "trtype": "tcp",
00:22:04.759 "traddr": "10.0.0.2",
00:22:04.759 "adrfam": "ipv4",
00:22:04.759 "trsvcid": "4420",
00:22:04.759 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:04.759 "hostaddr": "10.0.0.2",
00:22:04.759 "hostsvcid": "60000",
00:22:04.759 "prchk_reftag": false,
00:22:04.759 "prchk_guard": false,
00:22:04.759 "hdgst": false,
00:22:04.759 "ddgst": false,
00:22:04.759 "multipath": "failover",
00:22:04.759 "method": "bdev_nvme_attach_controller",
00:22:04.759 "req_id": 1
00:22:04.759 }
00:22:04.759 Got JSON-RPC error response
00:22:04.759 response:
00:22:04.759 {
00:22:04.759 "code": -114,
00:22:04.759 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:04.759 }
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.759 
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.759 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:05.017 
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:22:05.017 07:27:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:06.389 0
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2519034
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2519034 ']'
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2519034
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2519034
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2519034'
00:22:06.389 killing process with pid 2519034
00:22:06.389 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2519034
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2519034
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u
00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat
00:22:06.390 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:06.390 [2024-07-25 07:27:36.684143] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:22:06.390 [2024-07-25 07:27:36.684238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519034 ] 00:22:06.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.390 [2024-07-25 07:27:36.743270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.390 [2024-07-25 07:27:36.852142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.390 [2024-07-25 07:27:37.412277] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 2dfba9ca-fcd6-4106-8967-2e73d60d27d5 already exists 00:22:06.390 [2024-07-25 07:27:37.412318] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:2dfba9ca-fcd6-4106-8967-2e73d60d27d5 alias for bdev NVMe1n1 00:22:06.390 [2024-07-25 07:27:37.412333] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:06.390 Running I/O for 1 seconds... 
00:22:06.390 00:22:06.390 Latency(us) 00:22:06.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.390 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:06.390 NVMe0n1 : 1.00 18883.25 73.76 0.00 0.00 6767.88 5898.24 13107.20 00:22:06.390 =================================================================================================================== 00:22:06.390 Total : 18883.25 73.76 0.00 0.00 6767.88 5898.24 13107.20 00:22:06.390 Received shutdown signal, test time was about 1.000000 seconds 00:22:06.390 00:22:06.390 Latency(us) 00:22:06.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.390 =================================================================================================================== 00:22:06.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.390 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.390 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.647 
rmmod nvme_tcp 00:22:06.647 rmmod nvme_fabrics 00:22:06.647 rmmod nvme_keyring 00:22:06.647 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.647 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:06.647 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:06.647 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2518897 ']' 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2518897 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2518897 ']' 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2518897 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2518897 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2518897' 00:22:06.648 killing process with pid 2518897 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2518897 00:22:06.648 07:27:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2518897 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.905 07:27:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.905 07:27:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.433 00:22:09.433 real 0m7.443s 00:22:09.433 user 0m11.732s 00:22:09.433 sys 0m2.255s 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:09.433 ************************************ 00:22:09.433 END TEST nvmf_multicontroller 00:22:09.433 ************************************ 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.433 ************************************ 00:22:09.433 START TEST nvmf_aer 00:22:09.433 ************************************ 00:22:09.433 07:27:41 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:09.433 * Looking for test storage... 00:22:09.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.433 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.434 07:27:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:10.810 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.810 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:10.811 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:10.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:10.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.811 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.067 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:11.068 00:22:11.068 --- 10.0.0.2 ping statistics --- 00:22:11.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.068 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:11.068 00:22:11.068 --- 10.0.0.1 ping statistics --- 00:22:11.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.068 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2521245 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2521245 00:22:11.068 07:27:43 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2521245 ']' 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.068 07:27:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.068 [2024-07-25 07:27:43.538507] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:11.068 [2024-07-25 07:27:43.538598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.068 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.325 [2024-07-25 07:27:43.607993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.325 [2024-07-25 07:27:43.725300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.325 [2024-07-25 07:27:43.725358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.325 [2024-07-25 07:27:43.725375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.325 [2024-07-25 07:27:43.725389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:11.325 [2024-07-25 07:27:43.725400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.325 [2024-07-25 07:27:43.725492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.325 [2024-07-25 07:27:43.725550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.325 [2024-07-25 07:27:43.725621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.325 [2024-07-25 07:27:43.725624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.257 [2024-07-25 07:27:44.532942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.257 07:27:44 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.257 Malloc0 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.257 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.258 [2024-07-25 07:27:44.583984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.258 [ 
00:22:12.258 {
00:22:12.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:12.258 "subtype": "Discovery",
00:22:12.258 "listen_addresses": [],
00:22:12.258 "allow_any_host": true,
00:22:12.258 "hosts": []
00:22:12.258 },
00:22:12.258 {
00:22:12.258 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:12.258 "subtype": "NVMe",
00:22:12.258 "listen_addresses": [
00:22:12.258 {
00:22:12.258 "trtype": "TCP",
00:22:12.258 "adrfam": "IPv4",
00:22:12.258 "traddr": "10.0.0.2",
00:22:12.258 "trsvcid": "4420"
00:22:12.258 }
00:22:12.258 ],
00:22:12.258 "allow_any_host": true,
00:22:12.258 "hosts": [],
00:22:12.258 "serial_number": "SPDK00000000000001",
00:22:12.258 "model_number": "SPDK bdev Controller",
00:22:12.258 "max_namespaces": 2,
00:22:12.258 "min_cntlid": 1,
00:22:12.258 "max_cntlid": 65519,
00:22:12.258 "namespaces": [
00:22:12.258 {
00:22:12.258 "nsid": 1,
00:22:12.258 "bdev_name": "Malloc0",
00:22:12.258 "name": "Malloc0",
00:22:12.258 "nguid": "58E1EC799F9744208AAC796BA863E903",
00:22:12.258 "uuid": "58e1ec79-9f97-4420-8aac-796ba863e903"
00:22:12.258 }
00:22:12.258 ]
00:22:12.258 }
00:22:12.258 ]
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2521401
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:22:12.258 07:27:44
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:12.258 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:12.258 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']'
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:12.516 Malloc1
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:12.516 [
00:22:12.516 {
00:22:12.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:12.516 "subtype": "Discovery",
00:22:12.516 "listen_addresses": [],
00:22:12.516 "allow_any_host": true,
00:22:12.516 "hosts": []
00:22:12.516 },
00:22:12.516 {
00:22:12.516 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:12.516 "subtype": "NVMe",
00:22:12.516 "listen_addresses": [
00:22:12.516 {
00:22:12.516 "trtype": "TCP",
00:22:12.516 "adrfam": "IPv4",
00:22:12.516 "traddr": "10.0.0.2",
00:22:12.516 "trsvcid": "4420"
00:22:12.516 }
00:22:12.516 ],
00:22:12.516 "allow_any_host": true,
00:22:12.516 "hosts": [],
00:22:12.516 "serial_number": "SPDK00000000000001",
00:22:12.516 "model_number": "SPDK bdev Controller",
00:22:12.516 "max_namespaces": 2,
00:22:12.516 "min_cntlid": 1,
00:22:12.516 "max_cntlid": 65519,
00:22:12.516 "namespaces": [
00:22:12.516 {
00:22:12.516 "nsid": 1,
00:22:12.516 "bdev_name": "Malloc0",
00:22:12.516 "name": "Malloc0",
00:22:12.516 "nguid": "58E1EC799F9744208AAC796BA863E903",
00:22:12.516 "uuid": "58e1ec79-9f97-4420-8aac-796ba863e903"
00:22:12.516 },
00:22:12.516 {
00:22:12.516 "nsid": 2,
00:22:12.516 "bdev_name": "Malloc1",
00:22:12.516 "name": "Malloc1",
00:22:12.516 "nguid": "B0E3B69EB19D43968625EE4977D52129",
00:22:12.516 "uuid": "b0e3b69e-b19d-4396-8625-ee4977d52129"
00:22:12.516 }
00:22:12.516 ]
00:22:12.516 }
00:22:12.516 ]
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2521401
00:22:12.516 Asynchronous Event Request test
00:22:12.516 Attaching to 10.0.0.2
00:22:12.516 Attached to 10.0.0.2
00:22:12.516 Registering asynchronous event callbacks...
00:22:12.516 Starting namespace attribute notice tests for all controllers...
00:22:12.516 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:22:12.516 aer_cb - Changed Namespace
00:22:12.516 Cleaning up...
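The trace above shows the harness calling `waitforfile /tmp/aer_touch_file`: the `aer` binary is told (via `-t`) to create a touch file once its event callbacks are registered, and the script polls for that file (`local i=0`, `[ ! -e file ]`, `[ i -lt 200 ]`, `sleep 0.1`) instead of sleeping a fixed interval. A standalone sketch of that polling logic, assuming only bash; the function name and the 200 × 0.1 s cap mirror the trace, while the real helper lives in SPDK's autotest_common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the waitforfile loop seen in the xtrace output: poll for a file
# with a bounded number of 0.1 s retries (~20 s total), then report whether
# the file actually appeared.
waitforfile() {
    local file=$1
    local i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    # Final check mirrors the trace's last '[' '!' -e ... ']' test:
    # succeed only if the file exists by now.
    [ -e "$file" ]
}

# Usage sketch: a background job creates the file shortly after we start
# waiting, standing in for the aer binary touching its -t file.
f=$(mktemp -u)
( sleep 0.3; touch "$f" ) &
waitforfile "$f" && echo "file appeared"
rm -f "$f"
```

The bounded retry count matters in CI: if the spawned binary never signals readiness, the script fails after ~20 seconds rather than hanging the whole job.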
00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.516 rmmod nvme_tcp 
00:22:12.516 rmmod nvme_fabrics 00:22:12.516 rmmod nvme_keyring 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2521245 ']' 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2521245 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2521245 ']' 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2521245 00:22:12.516 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:12.517 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.517 07:27:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521245 00:22:12.517 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:12.517 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:12.517 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521245' 00:22:12.517 killing process with pid 2521245 00:22:12.517 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2521245 00:22:12.517 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2521245 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.774 07:27:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.774 07:27:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.302 00:22:15.302 real 0m5.913s 00:22:15.302 user 0m6.861s 00:22:15.302 sys 0m1.871s 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:15.302 ************************************ 00:22:15.302 END TEST nvmf_aer 00:22:15.302 ************************************ 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.302 ************************************ 00:22:15.302 START TEST nvmf_async_init 00:22:15.302 ************************************ 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:15.302 * Looking for test storage... 
00:22:15.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.302 07:27:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:15.302 07:27:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=36e511ea13cf4e9a9ce5745958cf6ce1 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.302 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.303 07:27:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.201 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.202 
07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:17.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.202 07:27:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:17.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:17.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:17.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:22:17.202 00:22:17.202 --- 10.0.0.2 ping statistics --- 00:22:17.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.202 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:17.202 00:22:17.202 --- 10.0.0.1 ping statistics --- 00:22:17.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.202 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.202 07:27:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:17.202 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2523333 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2523333 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2523333 ']' 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.203 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.203 [2024-07-25 07:27:49.627112] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
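The network bring-up traced above (nvmf/common.sh@229-268) moves one E810 port into a dedicated namespace, leaves its peer in the root namespace, and verifies connectivity with a ping in each direction. A minimal sketch of that same flow with plain iproute2/iptables commands follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing come from this run, so substitute your own NICs. This is a recap of the commands visible in the trace, not the script itself, and it needs root plus two real ports:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow logged above.
set -e

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the root (initiator) namespace
NS=cvl_0_0_ns_spdk

# Start from clean interfaces, then isolate the target port.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Initiator at 10.0.0.1, target at 10.0.0.2, both /24, as in the log.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in from the initiator side.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, mirroring common.sh@267-268: one ping each way.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```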
00:22:17.203 [2024-07-25 07:27:49.627197] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.203 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.203 [2024-07-25 07:27:49.690957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.460 [2024-07-25 07:27:49.806799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.460 [2024-07-25 07:27:49.806853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.460 [2024-07-25 07:27:49.806866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.460 [2024-07-25 07:27:49.806878] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.460 [2024-07-25 07:27:49.806887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
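The nvmf_tgt startup and subsystem configuration logged around this point (nvmfappstart, then the rpc_cmd calls from host/async_init.sh) amount to the sequence sketched below. SPDK_DIR is an assumed path taken from this workspace; the RPC client `scripts/rpc.py` talks to the default `/var/tmp/spdk.sock`, and the test additionally passes a namespace GUID to nvmf_subsystem_add_ns, which is omitted here. A sketch under those assumptions, not a verbatim extract of the test:

```shell
#!/usr/bin/env bash
# Sketch of the target bring-up exercised by async_init.sh.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed path
NS=cvl_0_0_ns_spdk

# Run the target inside the namespace that owns the test NIC
# (-m 0x1: core 0 only; -e 0xFFFF: all tracepoint groups, as in the log).
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

# Once the app listens on /var/tmp/spdk.sock, configure it over JSON-RPC.
rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```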
00:22:17.460 [2024-07-25 07:27:49.806914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.460 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 [2024-07-25 07:27:49.943827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 null0 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 36e511ea13cf4e9a9ce5745958cf6ce1 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 [2024-07-25 07:27:49.984090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.461 07:27:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.718 nvme0n1 00:22:17.718 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.718 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:17.718 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.718 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.718 [ 00:22:17.718 { 00:22:17.718 "name": "nvme0n1", 00:22:17.718 "aliases": [ 00:22:17.718 "36e511ea-13cf-4e9a-9ce5-745958cf6ce1" 00:22:17.718 ], 00:22:17.718 "product_name": "NVMe disk", 00:22:17.718 "block_size": 512, 00:22:17.718 "num_blocks": 2097152, 00:22:17.718 "uuid": "36e511ea-13cf-4e9a-9ce5-745958cf6ce1", 00:22:17.718 "assigned_rate_limits": { 00:22:17.718 "rw_ios_per_sec": 0, 00:22:17.718 "rw_mbytes_per_sec": 0, 00:22:17.718 "r_mbytes_per_sec": 0, 00:22:17.718 "w_mbytes_per_sec": 0 00:22:17.718 }, 00:22:17.718 "claimed": false, 00:22:17.718 "zoned": false, 00:22:17.718 "supported_io_types": { 00:22:17.718 "read": true, 00:22:17.718 "write": true, 00:22:17.718 "unmap": false, 00:22:17.718 "flush": true, 00:22:17.718 "reset": true, 00:22:17.718 "nvme_admin": true, 00:22:17.718 "nvme_io": true, 00:22:17.718 "nvme_io_md": false, 00:22:17.718 "write_zeroes": true, 00:22:17.718 "zcopy": false, 00:22:17.718 "get_zone_info": false, 00:22:17.718 "zone_management": false, 00:22:17.718 "zone_append": false, 00:22:17.718 "compare": true, 00:22:17.718 "compare_and_write": true, 00:22:17.718 "abort": true, 00:22:17.718 "seek_hole": false, 00:22:17.718 "seek_data": false, 00:22:17.718 "copy": true, 00:22:17.718 "nvme_iov_md": false 
00:22:17.718 }, 00:22:17.718 "memory_domains": [ 00:22:17.718 { 00:22:17.718 "dma_device_id": "system", 00:22:17.718 "dma_device_type": 1 00:22:17.718 } 00:22:17.718 ], 00:22:17.718 "driver_specific": { 00:22:17.719 "nvme": [ 00:22:17.719 { 00:22:17.719 "trid": { 00:22:17.719 "trtype": "TCP", 00:22:17.719 "adrfam": "IPv4", 00:22:17.719 "traddr": "10.0.0.2", 00:22:17.719 "trsvcid": "4420", 00:22:17.719 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:17.719 }, 00:22:17.719 "ctrlr_data": { 00:22:17.719 "cntlid": 1, 00:22:17.719 "vendor_id": "0x8086", 00:22:17.719 "model_number": "SPDK bdev Controller", 00:22:17.719 "serial_number": "00000000000000000000", 00:22:17.719 "firmware_revision": "24.09", 00:22:17.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.719 "oacs": { 00:22:17.719 "security": 0, 00:22:17.719 "format": 0, 00:22:17.719 "firmware": 0, 00:22:17.719 "ns_manage": 0 00:22:17.719 }, 00:22:17.719 "multi_ctrlr": true, 00:22:17.719 "ana_reporting": false 00:22:17.719 }, 00:22:17.719 "vs": { 00:22:17.719 "nvme_version": "1.3" 00:22:17.719 }, 00:22:17.719 "ns_data": { 00:22:17.719 "id": 1, 00:22:17.719 "can_share": true 00:22:17.719 } 00:22:17.719 } 00:22:17.719 ], 00:22:17.719 "mp_policy": "active_passive" 00:22:17.719 } 00:22:17.719 } 00:22:17.719 ] 00:22:17.719 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.719 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:17.719 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.719 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.719 [2024-07-25 07:27:50.236912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:17.719 [2024-07-25 07:27:50.236995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7ce0 
(9): Bad file descriptor 00:22:17.977 [2024-07-25 07:27:50.379402] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 [ 00:22:17.977 { 00:22:17.977 "name": "nvme0n1", 00:22:17.977 "aliases": [ 00:22:17.977 "36e511ea-13cf-4e9a-9ce5-745958cf6ce1" 00:22:17.977 ], 00:22:17.977 "product_name": "NVMe disk", 00:22:17.977 "block_size": 512, 00:22:17.977 "num_blocks": 2097152, 00:22:17.977 "uuid": "36e511ea-13cf-4e9a-9ce5-745958cf6ce1", 00:22:17.977 "assigned_rate_limits": { 00:22:17.977 "rw_ios_per_sec": 0, 00:22:17.977 "rw_mbytes_per_sec": 0, 00:22:17.977 "r_mbytes_per_sec": 0, 00:22:17.977 "w_mbytes_per_sec": 0 00:22:17.977 }, 00:22:17.977 "claimed": false, 00:22:17.977 "zoned": false, 00:22:17.977 "supported_io_types": { 00:22:17.977 "read": true, 00:22:17.977 "write": true, 00:22:17.977 "unmap": false, 00:22:17.977 "flush": true, 00:22:17.977 "reset": true, 00:22:17.977 "nvme_admin": true, 00:22:17.977 "nvme_io": true, 00:22:17.977 "nvme_io_md": false, 00:22:17.977 "write_zeroes": true, 00:22:17.977 "zcopy": false, 00:22:17.977 "get_zone_info": false, 00:22:17.977 "zone_management": false, 00:22:17.977 "zone_append": false, 00:22:17.977 "compare": true, 00:22:17.977 "compare_and_write": true, 00:22:17.977 "abort": true, 00:22:17.977 "seek_hole": false, 00:22:17.977 "seek_data": false, 00:22:17.977 "copy": true, 00:22:17.977 "nvme_iov_md": false 00:22:17.977 }, 00:22:17.977 "memory_domains": [ 00:22:17.977 { 00:22:17.977 "dma_device_id": "system", 00:22:17.977 "dma_device_type": 1 
00:22:17.977 } 00:22:17.977 ], 00:22:17.977 "driver_specific": { 00:22:17.977 "nvme": [ 00:22:17.977 { 00:22:17.977 "trid": { 00:22:17.977 "trtype": "TCP", 00:22:17.977 "adrfam": "IPv4", 00:22:17.977 "traddr": "10.0.0.2", 00:22:17.977 "trsvcid": "4420", 00:22:17.977 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:17.977 }, 00:22:17.977 "ctrlr_data": { 00:22:17.977 "cntlid": 2, 00:22:17.977 "vendor_id": "0x8086", 00:22:17.977 "model_number": "SPDK bdev Controller", 00:22:17.977 "serial_number": "00000000000000000000", 00:22:17.977 "firmware_revision": "24.09", 00:22:17.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.977 "oacs": { 00:22:17.977 "security": 0, 00:22:17.977 "format": 0, 00:22:17.977 "firmware": 0, 00:22:17.977 "ns_manage": 0 00:22:17.977 }, 00:22:17.977 "multi_ctrlr": true, 00:22:17.977 "ana_reporting": false 00:22:17.977 }, 00:22:17.977 "vs": { 00:22:17.977 "nvme_version": "1.3" 00:22:17.977 }, 00:22:17.977 "ns_data": { 00:22:17.977 "id": 1, 00:22:17.977 "can_share": true 00:22:17.977 } 00:22:17.977 } 00:22:17.977 ], 00:22:17.977 "mp_policy": "active_passive" 00:22:17.977 } 00:22:17.977 } 00:22:17.977 ] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zil28zxNzT 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zil28zxNzT 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 [2024-07-25 07:27:50.429798] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.977 [2024-07-25 07:27:50.429974] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zil28zxNzT 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 [2024-07-25 07:27:50.437796] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zil28zxNzT 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.977 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 [2024-07-25 07:27:50.445812] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.977 [2024-07-25 07:27:50.445873] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:18.235 nvme0n1 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:18.235 [ 00:22:18.235 { 00:22:18.235 "name": "nvme0n1", 00:22:18.235 "aliases": [ 00:22:18.235 "36e511ea-13cf-4e9a-9ce5-745958cf6ce1" 00:22:18.235 ], 00:22:18.235 "product_name": "NVMe disk", 00:22:18.235 "block_size": 512, 00:22:18.235 "num_blocks": 2097152, 00:22:18.235 "uuid": "36e511ea-13cf-4e9a-9ce5-745958cf6ce1", 00:22:18.235 "assigned_rate_limits": { 00:22:18.235 "rw_ios_per_sec": 0, 00:22:18.235 "rw_mbytes_per_sec": 0, 00:22:18.235 "r_mbytes_per_sec": 0, 00:22:18.235 "w_mbytes_per_sec": 0 00:22:18.235 }, 00:22:18.235 "claimed": false, 00:22:18.235 "zoned": false, 00:22:18.235 "supported_io_types": { 
00:22:18.235 "read": true, 00:22:18.235 "write": true, 00:22:18.235 "unmap": false, 00:22:18.235 "flush": true, 00:22:18.235 "reset": true, 00:22:18.235 "nvme_admin": true, 00:22:18.235 "nvme_io": true, 00:22:18.235 "nvme_io_md": false, 00:22:18.235 "write_zeroes": true, 00:22:18.235 "zcopy": false, 00:22:18.235 "get_zone_info": false, 00:22:18.235 "zone_management": false, 00:22:18.235 "zone_append": false, 00:22:18.235 "compare": true, 00:22:18.235 "compare_and_write": true, 00:22:18.235 "abort": true, 00:22:18.235 "seek_hole": false, 00:22:18.235 "seek_data": false, 00:22:18.235 "copy": true, 00:22:18.235 "nvme_iov_md": false 00:22:18.235 }, 00:22:18.235 "memory_domains": [ 00:22:18.235 { 00:22:18.235 "dma_device_id": "system", 00:22:18.235 "dma_device_type": 1 00:22:18.235 } 00:22:18.235 ], 00:22:18.235 "driver_specific": { 00:22:18.235 "nvme": [ 00:22:18.235 { 00:22:18.235 "trid": { 00:22:18.235 "trtype": "TCP", 00:22:18.235 "adrfam": "IPv4", 00:22:18.235 "traddr": "10.0.0.2", 00:22:18.235 "trsvcid": "4421", 00:22:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:18.235 }, 00:22:18.235 "ctrlr_data": { 00:22:18.235 "cntlid": 3, 00:22:18.235 "vendor_id": "0x8086", 00:22:18.235 "model_number": "SPDK bdev Controller", 00:22:18.235 "serial_number": "00000000000000000000", 00:22:18.235 "firmware_revision": "24.09", 00:22:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.235 "oacs": { 00:22:18.235 "security": 0, 00:22:18.235 "format": 0, 00:22:18.235 "firmware": 0, 00:22:18.235 "ns_manage": 0 00:22:18.235 }, 00:22:18.235 "multi_ctrlr": true, 00:22:18.235 "ana_reporting": false 00:22:18.235 }, 00:22:18.235 "vs": { 00:22:18.235 "nvme_version": "1.3" 00:22:18.235 }, 00:22:18.235 "ns_data": { 00:22:18.235 "id": 1, 00:22:18.235 "can_share": true 00:22:18.235 } 00:22:18.235 } 00:22:18.235 ], 00:22:18.235 "mp_policy": "active_passive" 00:22:18.235 } 00:22:18.235 } 00:22:18.235 ] 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.zil28zxNzT 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.235 rmmod nvme_tcp 00:22:18.235 rmmod nvme_fabrics 00:22:18.235 rmmod nvme_keyring 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2523333 ']' 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
2523333 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2523333 ']' 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2523333 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2523333 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2523333' 00:22:18.235 killing process with pid 2523333 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2523333 00:22:18.235 [2024-07-25 07:27:50.632401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.235 [2024-07-25 07:27:50.632438] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.235 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2523333 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.494 07:27:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.475 00:22:20.475 real 0m5.569s 00:22:20.475 user 0m2.115s 00:22:20.475 sys 0m1.840s 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.475 ************************************ 00:22:20.475 END TEST nvmf_async_init 00:22:20.475 ************************************ 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.475 ************************************ 00:22:20.475 START TEST dma 00:22:20.475 ************************************ 00:22:20.475 07:27:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:20.734 * Looking for test storage... 
00:22:20.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.734 07:27:53 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:20.734 00:22:20.734 real 0m0.075s 00:22:20.734 user 0m0.037s 00:22:20.734 sys 0m0.043s 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:20.734 ************************************ 00:22:20.734 END TEST dma 00:22:20.734 ************************************ 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.734 ************************************ 00:22:20.734 START TEST nvmf_identify 00:22:20.734 ************************************ 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:20.734 * Looking for test storage... 
00:22:20.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.734 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.735 07:27:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.632 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.632 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.633 07:27:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:22.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:22.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:22.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:22.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:22.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:22.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:22:22.633 00:22:22.633 --- 10.0.0.2 ping statistics --- 00:22:22.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.633 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:22.633 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:22:22.891 00:22:22.891 --- 10.0.0.1 ping statistics --- 00:22:22.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.891 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2525463 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2525463 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2525463 ']' 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.891 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.891 [2024-07-25 07:27:55.236395] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:22:22.891 [2024-07-25 07:27:55.236483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.891 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.891 [2024-07-25 07:27:55.299408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.891 [2024-07-25 07:27:55.408850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.891 [2024-07-25 07:27:55.408900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.891 [2024-07-25 07:27:55.408924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.891 [2024-07-25 07:27:55.408935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.891 [2024-07-25 07:27:55.408944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:22.891 [2024-07-25 07:27:55.409028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.891 [2024-07-25 07:27:55.409094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.891 [2024-07-25 07:27:55.409160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.891 [2024-07-25 07:27:55.409163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.149 [2024-07-25 07:27:55.550504] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.149 Malloc0 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.149 07:27:55 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:23.149 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.150 [2024-07-25 07:27:55.626117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.150 07:27:55 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:23.150 [
00:22:23.150 {
00:22:23.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:23.150 "subtype": "Discovery",
00:22:23.150 "listen_addresses": [
00:22:23.150 {
00:22:23.150 "trtype": "TCP",
00:22:23.150 "adrfam": "IPv4",
00:22:23.150 "traddr": "10.0.0.2",
00:22:23.150 "trsvcid": "4420"
00:22:23.150 }
00:22:23.150 ],
00:22:23.150 "allow_any_host": true,
00:22:23.150 "hosts": []
00:22:23.150 },
00:22:23.150 {
00:22:23.150 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:23.150 "subtype": "NVMe",
00:22:23.150 "listen_addresses": [
00:22:23.150 {
00:22:23.150 "trtype": "TCP",
00:22:23.150 "adrfam": "IPv4",
00:22:23.150 "traddr": "10.0.0.2",
00:22:23.150 "trsvcid": "4420"
00:22:23.150 }
00:22:23.150 ],
00:22:23.150 "allow_any_host": true,
00:22:23.150 "hosts": [],
00:22:23.150 "serial_number": "SPDK00000000000001",
00:22:23.150 "model_number": "SPDK bdev Controller",
00:22:23.150 "max_namespaces": 32,
00:22:23.150 "min_cntlid": 1,
00:22:23.150 "max_cntlid": 65519,
00:22:23.150 "namespaces": [
00:22:23.150 {
00:22:23.150 "nsid": 1,
00:22:23.150 "bdev_name": "Malloc0",
00:22:23.150 "name": "Malloc0",
00:22:23.150 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:23.150 "eui64": "ABCDEF0123456789",
00:22:23.150 "uuid": "e51dfe5c-42d2-4df3-9a42-94a617342b56"
00:22:23.150 }
00:22:23.150 ]
00:22:23.150 }
00:22:23.150 ]
00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.150 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:23.150 [2024-07-25 07:27:55.667726] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:23.150 [2024-07-25 07:27:55.667775] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525607 ] 00:22:23.150 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.410 [2024-07-25 07:27:55.701727] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:23.410 [2024-07-25 07:27:55.701800] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.410 [2024-07-25 07:27:55.701811] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.410 [2024-07-25 07:27:55.701829] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.410 [2024-07-25 07:27:55.701844] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.410 [2024-07-25 07:27:55.705296] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:23.410 [2024-07-25 07:27:55.705347] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11b2540 0 00:22:23.410 [2024-07-25 07:27:55.710262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.410 [2024-07-25 07:27:55.710290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.410 [2024-07-25 07:27:55.710301] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:22:23.410 [2024-07-25 07:27:55.710308] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.410 [2024-07-25 07:27:55.710362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.710375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.710383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.410 [2024-07-25 07:27:55.710404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.410 [2024-07-25 07:27:55.710437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.410 [2024-07-25 07:27:55.719255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.410 [2024-07-25 07:27:55.719273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.410 [2024-07-25 07:27:55.719281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.410 [2024-07-25 07:27:55.719318] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.410 [2024-07-25 07:27:55.719329] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:23.410 [2024-07-25 07:27:55.719340] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:23.410 [2024-07-25 07:27:55.719364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.410 [2024-07-25 07:27:55.719392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.410 [2024-07-25 07:27:55.719416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.410 [2024-07-25 07:27:55.719561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.410 [2024-07-25 07:27:55.719573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.410 [2024-07-25 07:27:55.719580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.410 [2024-07-25 07:27:55.719602] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:23.410 [2024-07-25 07:27:55.719615] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:23.410 [2024-07-25 07:27:55.719627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.410 [2024-07-25 07:27:55.719653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.410 [2024-07-25 07:27:55.719674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.410 [2024-07-25 07:27:55.719812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.410 [2024-07-25 07:27:55.719827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:23.410 [2024-07-25 07:27:55.719834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.410 [2024-07-25 07:27:55.719851] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:23.410 [2024-07-25 07:27:55.719865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.410 [2024-07-25 07:27:55.719878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.719892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.410 [2024-07-25 07:27:55.719903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.410 [2024-07-25 07:27:55.719924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.410 [2024-07-25 07:27:55.720048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.410 [2024-07-25 07:27:55.720061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.410 [2024-07-25 07:27:55.720068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.720075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.410 [2024-07-25 07:27:55.720085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.410 [2024-07-25 07:27:55.720101] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.720110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.720117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.410 [2024-07-25 07:27:55.720127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.410 [2024-07-25 07:27:55.720148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.410 [2024-07-25 07:27:55.720289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.410 [2024-07-25 07:27:55.720304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.410 [2024-07-25 07:27:55.720312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.720319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.410 [2024-07-25 07:27:55.720328] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:23.410 [2024-07-25 07:27:55.720337] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:23.410 [2024-07-25 07:27:55.720350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.410 [2024-07-25 07:27:55.720469] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:23.410 [2024-07-25 07:27:55.720477] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:23.410 [2024-07-25 07:27:55.720494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.720502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.410 [2024-07-25 07:27:55.720509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.410 [2024-07-25 07:27:55.720520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.410 [2024-07-25 07:27:55.720541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.410 [2024-07-25 07:27:55.720674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.411 [2024-07-25 07:27:55.720689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.411 [2024-07-25 07:27:55.720696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.720703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.411 [2024-07-25 07:27:55.720712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.411 [2024-07-25 07:27:55.720728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.720737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.720744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.720755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.411 [2024-07-25 07:27:55.720780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.411 [2024-07-25 
07:27:55.720915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.411 [2024-07-25 07:27:55.720930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.411 [2024-07-25 07:27:55.720938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.720945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.411 [2024-07-25 07:27:55.720953] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.411 [2024-07-25 07:27:55.720962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:23.411 [2024-07-25 07:27:55.720975] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:23.411 [2024-07-25 07:27:55.720989] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.411 [2024-07-25 07:27:55.721009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.411 [2024-07-25 07:27:55.721050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.411 [2024-07-25 07:27:55.721232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.411 [2024-07-25 07:27:55.721255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:23.411 [2024-07-25 07:27:55.721264] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721272] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b2540): datao=0, datal=4096, cccid=0 00:22:23.411 [2024-07-25 07:27:55.721280] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12123c0) on tqpair(0x11b2540): expected_datao=0, payload_size=4096 00:22:23.411 [2024-07-25 07:27:55.721296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721308] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721317] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.411 [2024-07-25 07:27:55.721340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.411 [2024-07-25 07:27:55.721347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.411 [2024-07-25 07:27:55.721375] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:23.411 [2024-07-25 07:27:55.721385] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:23.411 [2024-07-25 07:27:55.721393] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:23.411 [2024-07-25 07:27:55.721402] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:23.411 [2024-07-25 07:27:55.721411] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:22:23.411 [2024-07-25 07:27:55.721419] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:23.411 [2024-07-25 07:27:55.721435] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.411 [2024-07-25 07:27:55.721456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.411 [2024-07-25 07:27:55.721506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.411 [2024-07-25 07:27:55.721642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.411 [2024-07-25 07:27:55.721654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.411 [2024-07-25 07:27:55.721662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.411 [2024-07-25 07:27:55.721684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.411 [2024-07-25 07:27:55.721719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.411 [2024-07-25 07:27:55.721751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.411 [2024-07-25 07:27:55.721784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.411 [2024-07-25 07:27:55.721816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.411 [2024-07-25 07:27:55.721835] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:22:23.411 [2024-07-25 07:27:55.721849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.721856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.721867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.411 [2024-07-25 07:27:55.721890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12123c0, cid 0, qid 0 00:22:23.411 [2024-07-25 07:27:55.721901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212540, cid 1, qid 0 00:22:23.411 [2024-07-25 07:27:55.721909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12126c0, cid 2, qid 0 00:22:23.411 [2024-07-25 07:27:55.721921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.411 [2024-07-25 07:27:55.721929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12129c0, cid 4, qid 0 00:22:23.411 [2024-07-25 07:27:55.722085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.411 [2024-07-25 07:27:55.722101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.411 [2024-07-25 07:27:55.722108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.722115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12129c0) on tqpair=0x11b2540 00:22:23.411 [2024-07-25 07:27:55.722125] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:23.411 [2024-07-25 07:27:55.722134] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:23.411 [2024-07-25 07:27:55.722152] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.722162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b2540) 00:22:23.411 [2024-07-25 07:27:55.722173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.411 [2024-07-25 07:27:55.722195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12129c0, cid 4, qid 0 00:22:23.411 [2024-07-25 07:27:55.722353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.411 [2024-07-25 07:27:55.722369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.411 [2024-07-25 07:27:55.722376] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.722383] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b2540): datao=0, datal=4096, cccid=4 00:22:23.411 [2024-07-25 07:27:55.722391] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12129c0) on tqpair(0x11b2540): expected_datao=0, payload_size=4096 00:22:23.411 [2024-07-25 07:27:55.722398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.722416] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.722425] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.767252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.411 [2024-07-25 07:27:55.767272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.411 [2024-07-25 07:27:55.767280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.411 [2024-07-25 07:27:55.767287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12129c0) on tqpair=0x11b2540 00:22:23.412 [2024-07-25 07:27:55.767309] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:23.412 [2024-07-25 07:27:55.767355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b2540) 00:22:23.412 [2024-07-25 07:27:55.767378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.412 [2024-07-25 07:27:55.767390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11b2540) 00:22:23.412 [2024-07-25 07:27:55.767414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.412 [2024-07-25 07:27:55.767443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12129c0, cid 4, qid 0 00:22:23.412 [2024-07-25 07:27:55.767456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212b40, cid 5, qid 0 00:22:23.412 [2024-07-25 07:27:55.767619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.412 [2024-07-25 07:27:55.767633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.412 [2024-07-25 07:27:55.767640] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767647] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b2540): datao=0, datal=1024, cccid=4 00:22:23.412 [2024-07-25 07:27:55.767655] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12129c0) on tqpair(0x11b2540): expected_datao=0, 
payload_size=1024 00:22:23.412 [2024-07-25 07:27:55.767662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767673] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767681] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.412 [2024-07-25 07:27:55.767699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.412 [2024-07-25 07:27:55.767706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.767713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212b40) on tqpair=0x11b2540 00:22:23.412 [2024-07-25 07:27:55.808363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.412 [2024-07-25 07:27:55.808382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.412 [2024-07-25 07:27:55.808389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12129c0) on tqpair=0x11b2540 00:22:23.412 [2024-07-25 07:27:55.808416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b2540) 00:22:23.412 [2024-07-25 07:27:55.808437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.412 [2024-07-25 07:27:55.808467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12129c0, cid 4, qid 0 00:22:23.412 [2024-07-25 07:27:55.808622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.412 [2024-07-25 07:27:55.808637] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.412 [2024-07-25 07:27:55.808644] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808651] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b2540): datao=0, datal=3072, cccid=4 00:22:23.412 [2024-07-25 07:27:55.808659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12129c0) on tqpair(0x11b2540): expected_datao=0, payload_size=3072 00:22:23.412 [2024-07-25 07:27:55.808666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808677] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808685] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.412 [2024-07-25 07:27:55.808724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.412 [2024-07-25 07:27:55.808731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12129c0) on tqpair=0x11b2540 00:22:23.412 [2024-07-25 07:27:55.808753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b2540) 00:22:23.412 [2024-07-25 07:27:55.808773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.412 [2024-07-25 07:27:55.808801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12129c0, cid 4, qid 0 00:22:23.412 [2024-07-25 07:27:55.808959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.412 [2024-07-25 
07:27:55.808977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.412 [2024-07-25 07:27:55.808985] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.808992] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b2540): datao=0, datal=8, cccid=4 00:22:23.412 [2024-07-25 07:27:55.809000] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12129c0) on tqpair(0x11b2540): expected_datao=0, payload_size=8 00:22:23.412 [2024-07-25 07:27:55.809007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.809017] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.809025] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.849377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.412 [2024-07-25 07:27:55.849396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.412 [2024-07-25 07:27:55.849404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.412 [2024-07-25 07:27:55.849411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12129c0) on tqpair=0x11b2540 00:22:23.412 ===================================================== 00:22:23.412 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:23.412 ===================================================== 00:22:23.412 Controller Capabilities/Features 00:22:23.412 ================================ 00:22:23.412 Vendor ID: 0000 00:22:23.412 Subsystem Vendor ID: 0000 00:22:23.412 Serial Number: .................... 00:22:23.412 Model Number: ........................................ 
00:22:23.412 Firmware Version: 24.09 00:22:23.412 Recommended Arb Burst: 0 00:22:23.412 IEEE OUI Identifier: 00 00 00 00:22:23.412 Multi-path I/O 00:22:23.412 May have multiple subsystem ports: No 00:22:23.412 May have multiple controllers: No 00:22:23.412 Associated with SR-IOV VF: No 00:22:23.412 Max Data Transfer Size: 131072 00:22:23.412 Max Number of Namespaces: 0 00:22:23.412 Max Number of I/O Queues: 1024 00:22:23.412 NVMe Specification Version (VS): 1.3 00:22:23.412 NVMe Specification Version (Identify): 1.3 00:22:23.412 Maximum Queue Entries: 128 00:22:23.412 Contiguous Queues Required: Yes 00:22:23.412 Arbitration Mechanisms Supported 00:22:23.412 Weighted Round Robin: Not Supported 00:22:23.412 Vendor Specific: Not Supported 00:22:23.412 Reset Timeout: 15000 ms 00:22:23.412 Doorbell Stride: 4 bytes 00:22:23.412 NVM Subsystem Reset: Not Supported 00:22:23.412 Command Sets Supported 00:22:23.412 NVM Command Set: Supported 00:22:23.412 Boot Partition: Not Supported 00:22:23.412 Memory Page Size Minimum: 4096 bytes 00:22:23.412 Memory Page Size Maximum: 4096 bytes 00:22:23.412 Persistent Memory Region: Not Supported 00:22:23.412 Optional Asynchronous Events Supported 00:22:23.412 Namespace Attribute Notices: Not Supported 00:22:23.412 Firmware Activation Notices: Not Supported 00:22:23.412 ANA Change Notices: Not Supported 00:22:23.412 PLE Aggregate Log Change Notices: Not Supported 00:22:23.412 LBA Status Info Alert Notices: Not Supported 00:22:23.412 EGE Aggregate Log Change Notices: Not Supported 00:22:23.412 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.412 Zone Descriptor Change Notices: Not Supported 00:22:23.412 Discovery Log Change Notices: Supported 00:22:23.412 Controller Attributes 00:22:23.412 128-bit Host Identifier: Not Supported 00:22:23.412 Non-Operational Permissive Mode: Not Supported 00:22:23.412 NVM Sets: Not Supported 00:22:23.412 Read Recovery Levels: Not Supported 00:22:23.412 Endurance Groups: Not Supported 00:22:23.412 
Predictable Latency Mode: Not Supported 00:22:23.412 Traffic Based Keep ALive: Not Supported 00:22:23.412 Namespace Granularity: Not Supported 00:22:23.412 SQ Associations: Not Supported 00:22:23.412 UUID List: Not Supported 00:22:23.412 Multi-Domain Subsystem: Not Supported 00:22:23.412 Fixed Capacity Management: Not Supported 00:22:23.412 Variable Capacity Management: Not Supported 00:22:23.412 Delete Endurance Group: Not Supported 00:22:23.412 Delete NVM Set: Not Supported 00:22:23.412 Extended LBA Formats Supported: Not Supported 00:22:23.412 Flexible Data Placement Supported: Not Supported 00:22:23.412 00:22:23.412 Controller Memory Buffer Support 00:22:23.412 ================================ 00:22:23.412 Supported: No 00:22:23.412 00:22:23.412 Persistent Memory Region Support 00:22:23.412 ================================ 00:22:23.412 Supported: No 00:22:23.412 00:22:23.412 Admin Command Set Attributes 00:22:23.412 ============================ 00:22:23.412 Security Send/Receive: Not Supported 00:22:23.412 Format NVM: Not Supported 00:22:23.412 Firmware Activate/Download: Not Supported 00:22:23.412 Namespace Management: Not Supported 00:22:23.412 Device Self-Test: Not Supported 00:22:23.413 Directives: Not Supported 00:22:23.413 NVMe-MI: Not Supported 00:22:23.413 Virtualization Management: Not Supported 00:22:23.413 Doorbell Buffer Config: Not Supported 00:22:23.413 Get LBA Status Capability: Not Supported 00:22:23.413 Command & Feature Lockdown Capability: Not Supported 00:22:23.413 Abort Command Limit: 1 00:22:23.413 Async Event Request Limit: 4 00:22:23.413 Number of Firmware Slots: N/A 00:22:23.413 Firmware Slot 1 Read-Only: N/A 00:22:23.413 Firmware Activation Without Reset: N/A 00:22:23.413 Multiple Update Detection Support: N/A 00:22:23.413 Firmware Update Granularity: No Information Provided 00:22:23.413 Per-Namespace SMART Log: No 00:22:23.413 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.413 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:23.413 Command Effects Log Page: Not Supported 00:22:23.413 Get Log Page Extended Data: Supported 00:22:23.413 Telemetry Log Pages: Not Supported 00:22:23.413 Persistent Event Log Pages: Not Supported 00:22:23.413 Supported Log Pages Log Page: May Support 00:22:23.413 Commands Supported & Effects Log Page: Not Supported 00:22:23.413 Feature Identifiers & Effects Log Page:May Support 00:22:23.413 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.413 Data Area 4 for Telemetry Log: Not Supported 00:22:23.413 Error Log Page Entries Supported: 128 00:22:23.413 Keep Alive: Not Supported 00:22:23.413 00:22:23.413 NVM Command Set Attributes 00:22:23.413 ========================== 00:22:23.413 Submission Queue Entry Size 00:22:23.413 Max: 1 00:22:23.413 Min: 1 00:22:23.413 Completion Queue Entry Size 00:22:23.413 Max: 1 00:22:23.413 Min: 1 00:22:23.413 Number of Namespaces: 0 00:22:23.413 Compare Command: Not Supported 00:22:23.413 Write Uncorrectable Command: Not Supported 00:22:23.413 Dataset Management Command: Not Supported 00:22:23.413 Write Zeroes Command: Not Supported 00:22:23.413 Set Features Save Field: Not Supported 00:22:23.413 Reservations: Not Supported 00:22:23.413 Timestamp: Not Supported 00:22:23.413 Copy: Not Supported 00:22:23.413 Volatile Write Cache: Not Present 00:22:23.413 Atomic Write Unit (Normal): 1 00:22:23.413 Atomic Write Unit (PFail): 1 00:22:23.413 Atomic Compare & Write Unit: 1 00:22:23.413 Fused Compare & Write: Supported 00:22:23.413 Scatter-Gather List 00:22:23.413 SGL Command Set: Supported 00:22:23.413 SGL Keyed: Supported 00:22:23.413 SGL Bit Bucket Descriptor: Not Supported 00:22:23.413 SGL Metadata Pointer: Not Supported 00:22:23.413 Oversized SGL: Not Supported 00:22:23.413 SGL Metadata Address: Not Supported 00:22:23.413 SGL Offset: Supported 00:22:23.413 Transport SGL Data Block: Not Supported 00:22:23.413 Replay Protected Memory Block: Not Supported 00:22:23.413 00:22:23.413 
Firmware Slot Information 00:22:23.413 ========================= 00:22:23.413 Active slot: 0 00:22:23.413 00:22:23.413 00:22:23.413 Error Log 00:22:23.413 ========= 00:22:23.413 00:22:23.413 Active Namespaces 00:22:23.413 ================= 00:22:23.413 Discovery Log Page 00:22:23.413 ================== 00:22:23.413 Generation Counter: 2 00:22:23.413 Number of Records: 2 00:22:23.413 Record Format: 0 00:22:23.413 00:22:23.413 Discovery Log Entry 0 00:22:23.413 ---------------------- 00:22:23.413 Transport Type: 3 (TCP) 00:22:23.413 Address Family: 1 (IPv4) 00:22:23.413 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:23.413 Entry Flags: 00:22:23.413 Duplicate Returned Information: 1 00:22:23.413 Explicit Persistent Connection Support for Discovery: 1 00:22:23.413 Transport Requirements: 00:22:23.413 Secure Channel: Not Required 00:22:23.413 Port ID: 0 (0x0000) 00:22:23.413 Controller ID: 65535 (0xffff) 00:22:23.413 Admin Max SQ Size: 128 00:22:23.413 Transport Service Identifier: 4420 00:22:23.413 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:23.413 Transport Address: 10.0.0.2 00:22:23.413 Discovery Log Entry 1 00:22:23.413 ---------------------- 00:22:23.413 Transport Type: 3 (TCP) 00:22:23.413 Address Family: 1 (IPv4) 00:22:23.413 Subsystem Type: 2 (NVM Subsystem) 00:22:23.413 Entry Flags: 00:22:23.413 Duplicate Returned Information: 0 00:22:23.413 Explicit Persistent Connection Support for Discovery: 0 00:22:23.413 Transport Requirements: 00:22:23.413 Secure Channel: Not Required 00:22:23.413 Port ID: 0 (0x0000) 00:22:23.413 Controller ID: 65535 (0xffff) 00:22:23.413 Admin Max SQ Size: 128 00:22:23.413 Transport Service Identifier: 4420 00:22:23.413 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:23.413 Transport Address: 10.0.0.2 [2024-07-25 07:27:55.849537] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:23.413 [2024-07-25 07:27:55.849561] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12123c0) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.849575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.413 [2024-07-25 07:27:55.849585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212540) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.849593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.413 [2024-07-25 07:27:55.849601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12126c0) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.849609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.413 [2024-07-25 07:27:55.849632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.849640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.413 [2024-07-25 07:27:55.849660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.849669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.849675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.413 [2024-07-25 07:27:55.849687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.413 [2024-07-25 07:27:55.849712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.413 [2024-07-25 07:27:55.849847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.413 [2024-07-25 07:27:55.849863] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.413 [2024-07-25 07:27:55.849870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.849877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.849891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.849899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.849906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.413 [2024-07-25 07:27:55.849917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.413 [2024-07-25 07:27:55.849945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.413 [2024-07-25 07:27:55.850099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.413 [2024-07-25 07:27:55.850115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.413 [2024-07-25 07:27:55.850123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.850130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.850140] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:23.413 [2024-07-25 07:27:55.850150] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:23.413 [2024-07-25 07:27:55.850166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.850175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.413 [2024-07-25 
07:27:55.850182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.413 [2024-07-25 07:27:55.850193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.413 [2024-07-25 07:27:55.850213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.413 [2024-07-25 07:27:55.850345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.413 [2024-07-25 07:27:55.850359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.413 [2024-07-25 07:27:55.850367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.850374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.413 [2024-07-25 07:27:55.850392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.850402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.413 [2024-07-25 07:27:55.850408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.413 [2024-07-25 07:27:55.850419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.414 [2024-07-25 07:27:55.850440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.414 [2024-07-25 07:27:55.850572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.414 [2024-07-25 07:27:55.850584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.414 [2024-07-25 07:27:55.850591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.850598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 
00:22:23.414 [2024-07-25 07:27:55.850614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.850623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.850630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.414 [2024-07-25 07:27:55.850641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.414 [2024-07-25 07:27:55.850661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.414 [2024-07-25 07:27:55.850796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.414 [2024-07-25 07:27:55.850811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.414 [2024-07-25 07:27:55.850818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.850825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.414 [2024-07-25 07:27:55.850841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.850851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.850858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.414 [2024-07-25 07:27:55.850869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.414 [2024-07-25 07:27:55.850894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.414 [2024-07-25 07:27:55.851022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.414 [2024-07-25 07:27:55.851038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.414 
[2024-07-25 07:27:55.851045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.851052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.414 [2024-07-25 07:27:55.851068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.851078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.851085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.414 [2024-07-25 07:27:55.851095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.414 [2024-07-25 07:27:55.851116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 0 00:22:23.414 [2024-07-25 07:27:55.855247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.414 [2024-07-25 07:27:55.855265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.414 [2024-07-25 07:27:55.855273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.855280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.414 [2024-07-25 07:27:55.855298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.855323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.855330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b2540) 00:22:23.414 [2024-07-25 07:27:55.855341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.414 [2024-07-25 07:27:55.855365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1212840, cid 3, qid 
0 00:22:23.414 [2024-07-25 07:27:55.855497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.414 [2024-07-25 07:27:55.855509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.414 [2024-07-25 07:27:55.855517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.414 [2024-07-25 07:27:55.855524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1212840) on tqpair=0x11b2540 00:22:23.414 [2024-07-25 07:27:55.855538] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:23.414 00:22:23.414 07:27:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:23.414 [2024-07-25 07:27:55.893114] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:22:23.414 [2024-07-25 07:27:55.893161] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525611 ] 00:22:23.414 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.414 [2024-07-25 07:27:55.927171] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:23.414 [2024-07-25 07:27:55.927248] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.414 [2024-07-25 07:27:55.927261] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.414 [2024-07-25 07:27:55.927277] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.414 [2024-07-25 07:27:55.927294] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.414 [2024-07-25 07:27:55.931279] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:23.414 [2024-07-25 07:27:55.931330] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbfb540 0 00:22:23.675 [2024-07-25 07:27:55.938250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.675 [2024-07-25 07:27:55.938274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.675 [2024-07-25 07:27:55.938284] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.675 [2024-07-25 07:27:55.938290] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.675 [2024-07-25 07:27:55.938332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.938343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:23.675 [2024-07-25 07:27:55.938350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.938365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.675 [2024-07-25 07:27:55.938391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.675 [2024-07-25 07:27:55.945256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.945274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.675 [2024-07-25 07:27:55.945282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.675 [2024-07-25 07:27:55.945303] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.675 [2024-07-25 07:27:55.945314] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:23.675 [2024-07-25 07:27:55.945323] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:23.675 [2024-07-25 07:27:55.945342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.945369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.675 [2024-07-25 07:27:55.945393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 
00:22:23.675 [2024-07-25 07:27:55.945552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.945567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.675 [2024-07-25 07:27:55.945574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.675 [2024-07-25 07:27:55.945593] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:23.675 [2024-07-25 07:27:55.945607] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:23.675 [2024-07-25 07:27:55.945620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.945644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.675 [2024-07-25 07:27:55.945666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.675 [2024-07-25 07:27:55.945788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.945802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.675 [2024-07-25 07:27:55.945809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.675 [2024-07-25 07:27:55.945825] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:23.675 [2024-07-25 07:27:55.945839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.675 [2024-07-25 07:27:55.945851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.945865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.945876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.675 [2024-07-25 07:27:55.945896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.675 [2024-07-25 07:27:55.946032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.946047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.675 [2024-07-25 07:27:55.946054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.675 [2024-07-25 07:27:55.946069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.675 [2024-07-25 07:27:55.946086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.946112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.675 [2024-07-25 07:27:55.946133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.675 [2024-07-25 07:27:55.946271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.946285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.675 [2024-07-25 07:27:55.946292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.675 [2024-07-25 07:27:55.946306] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:23.675 [2024-07-25 07:27:55.946315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:23.675 [2024-07-25 07:27:55.946328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.675 [2024-07-25 07:27:55.946438] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:23.675 [2024-07-25 07:27:55.946445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:23.675 [2024-07-25 07:27:55.946458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.946499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.675 [2024-07-25 07:27:55.946524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.675 [2024-07-25 07:27:55.946707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.946719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.675 [2024-07-25 07:27:55.946726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.675 [2024-07-25 07:27:55.946741] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.675 [2024-07-25 07:27:55.946757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.675 [2024-07-25 07:27:55.946772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.675 [2024-07-25 07:27:55.946783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.675 [2024-07-25 07:27:55.946803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.675 [2024-07-25 07:27:55.946937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.675 [2024-07-25 07:27:55.946952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.676 [2024-07-25 07:27:55.946959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.946965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.676 [2024-07-25 07:27:55.946973] 
nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.676 [2024-07-25 07:27:55.946981] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.946995] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:23.676 [2024-07-25 07:27:55.947008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.947022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.947030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.947041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.676 [2024-07-25 07:27:55.947062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.676 [2024-07-25 07:27:55.947282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.676 [2024-07-25 07:27:55.947298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.676 [2024-07-25 07:27:55.947305] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.947311] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=4096, cccid=0 00:22:23.676 [2024-07-25 07:27:55.947319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5b3c0) on tqpair(0xbfb540): expected_datao=0, payload_size=4096 00:22:23.676 [2024-07-25 07:27:55.947326] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.947346] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.947355] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.676 [2024-07-25 07:27:55.989285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.676 [2024-07-25 07:27:55.989293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.676 [2024-07-25 07:27:55.989316] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:23.676 [2024-07-25 07:27:55.989325] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:23.676 [2024-07-25 07:27:55.989332] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:23.676 [2024-07-25 07:27:55.989339] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:23.676 [2024-07-25 07:27:55.989346] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:23.676 [2024-07-25 07:27:55.989354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.989369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.989385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989394] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.989412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.676 [2024-07-25 07:27:55.989436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b3c0, cid 0, qid 0 00:22:23.676 [2024-07-25 07:27:55.989623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.676 [2024-07-25 07:27:55.989638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.676 [2024-07-25 07:27:55.989645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.676 [2024-07-25 07:27:55.989663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.989687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.676 [2024-07-25 07:27:55.989698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.989719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:23.676 [2024-07-25 07:27:55.989729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.989765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.676 [2024-07-25 07:27:55.989775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.989795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.676 [2024-07-25 07:27:55.989808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.989827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.989855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.989862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.989873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.676 [2024-07-25 07:27:55.989894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc5b3c0, cid 0, qid 0 00:22:23.676 [2024-07-25 07:27:55.989919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b540, cid 1, qid 0 00:22:23.676 [2024-07-25 07:27:55.989927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b6c0, cid 2, qid 0 00:22:23.676 [2024-07-25 07:27:55.989935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.676 [2024-07-25 07:27:55.989942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.676 [2024-07-25 07:27:55.990123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.676 [2024-07-25 07:27:55.990138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.676 [2024-07-25 07:27:55.990145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.676 [2024-07-25 07:27:55.990161] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:23.676 [2024-07-25 07:27:55.990170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.990189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.990203] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.990229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990250] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.990262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.676 [2024-07-25 07:27:55.990298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.676 [2024-07-25 07:27:55.990485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.676 [2024-07-25 07:27:55.990497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.676 [2024-07-25 07:27:55.990504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.676 [2024-07-25 07:27:55.990580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.990601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:23.676 [2024-07-25 07:27:55.990632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfb540) 00:22:23.676 [2024-07-25 07:27:55.990650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.676 [2024-07-25 07:27:55.990675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.676 [2024-07-25 07:27:55.990887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.676 [2024-07-25 07:27:55.990900] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.676 [2024-07-25 07:27:55.990907] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990913] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=4096, cccid=4 00:22:23.676 [2024-07-25 07:27:55.990921] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5b9c0) on tqpair(0xbfb540): expected_datao=0, payload_size=4096 00:22:23.676 [2024-07-25 07:27:55.990929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990939] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.676 [2024-07-25 07:27:55.990946] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.991033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.991040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.991065] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:23.677 [2024-07-25 07:27:55.991089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.991108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.991121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.991139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.991176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.677 [2024-07-25 07:27:55.991431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.677 [2024-07-25 07:27:55.991448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.677 [2024-07-25 07:27:55.991455] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991461] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=4096, cccid=4 00:22:23.677 [2024-07-25 07:27:55.991468] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5b9c0) on tqpair(0xbfb540): expected_datao=0, payload_size=4096 00:22:23.677 [2024-07-25 07:27:55.991476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991486] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991493] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.991580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.991587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.991621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:23.677 [2024-07-25 
07:27:55.991641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.991656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.991678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.991700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.677 [2024-07-25 07:27:55.991948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.677 [2024-07-25 07:27:55.991963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.677 [2024-07-25 07:27:55.991970] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.991977] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=4096, cccid=4 00:22:23.677 [2024-07-25 07:27:55.991984] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5b9c0) on tqpair(0xbfb540): expected_datao=0, payload_size=4096 00:22:23.677 [2024-07-25 07:27:55.991992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992002] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992009] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.992058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.992065] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.992086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992160] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:23.677 [2024-07-25 07:27:55.992168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:23.677 [2024-07-25 07:27:55.992177] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:23.677 [2024-07-25 07:27:55.992196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.992216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.992227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.992274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.677 [2024-07-25 07:27:55.992302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.677 [2024-07-25 07:27:55.992329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5bb40, cid 5, qid 0 00:22:23.677 [2024-07-25 07:27:55.992547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.992562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.992569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.992587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.992596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.992602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5bb40) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.992639] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.992659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.992679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5bb40, cid 5, qid 0 00:22:23.677 [2024-07-25 07:27:55.992860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.992872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.992879] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5bb40) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.992902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.992911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.992921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.992941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5bb40, cid 5, qid 0 00:22:23.677 [2024-07-25 07:27:55.993077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.993092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.993099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.993106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5bb40) on 
tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.993122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.993131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.993141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.993162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5bb40, cid 5, qid 0 00:22:23.677 [2024-07-25 07:27:55.997253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.677 [2024-07-25 07:27:55.997269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.677 [2024-07-25 07:27:55.997276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.997283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5bb40) on tqpair=0xbfb540 00:22:23.677 [2024-07-25 07:27:55.997309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.997321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.997331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 07:27:55.997347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.997355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfb540) 00:22:23.677 [2024-07-25 07:27:55.997364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-07-25 
07:27:55.997375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.677 [2024-07-25 07:27:55.997382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xbfb540) 00:22:23.678 [2024-07-25 07:27:55.997391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-07-25 07:27:55.997402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbfb540) 00:22:23.678 [2024-07-25 07:27:55.997418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-07-25 07:27:55.997441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5bb40, cid 5, qid 0 00:22:23.678 [2024-07-25 07:27:55.997467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b9c0, cid 4, qid 0 00:22:23.678 [2024-07-25 07:27:55.997475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5bcc0, cid 6, qid 0 00:22:23.678 [2024-07-25 07:27:55.997483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5be40, cid 7, qid 0 00:22:23.678 [2024-07-25 07:27:55.997730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.678 [2024-07-25 07:27:55.997746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.678 [2024-07-25 07:27:55.997752] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997758] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=8192, cccid=5 00:22:23.678 [2024-07-25 07:27:55.997766] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xc5bb40) on tqpair(0xbfb540): expected_datao=0, payload_size=8192 00:22:23.678 [2024-07-25 07:27:55.997773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997887] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997898] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.678 [2024-07-25 07:27:55.997916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.678 [2024-07-25 07:27:55.997922] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997928] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=512, cccid=4 00:22:23.678 [2024-07-25 07:27:55.997936] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5b9c0) on tqpair(0xbfb540): expected_datao=0, payload_size=512 00:22:23.678 [2024-07-25 07:27:55.997943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997952] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997960] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.678 [2024-07-25 07:27:55.997977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.678 [2024-07-25 07:27:55.997983] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.997990] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=512, cccid=6 00:22:23.678 [2024-07-25 07:27:55.997997] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5bcc0) on tqpair(0xbfb540): expected_datao=0, payload_size=512 
00:22:23.678 [2024-07-25 07:27:55.998008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998018] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998025] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.678 [2024-07-25 07:27:55.998042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.678 [2024-07-25 07:27:55.998048] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998054] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfb540): datao=0, datal=4096, cccid=7 00:22:23.678 [2024-07-25 07:27:55.998062] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc5be40) on tqpair(0xbfb540): expected_datao=0, payload_size=4096 00:22:23.678 [2024-07-25 07:27:55.998069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998078] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998086] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.678 [2024-07-25 07:27:55.998107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.678 [2024-07-25 07:27:55.998113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5bb40) on tqpair=0xbfb540 00:22:23.678 [2024-07-25 07:27:55.998138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.678 [2024-07-25 07:27:55.998150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.678 [2024-07-25 07:27:55.998156] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b9c0) on tqpair=0xbfb540 00:22:23.678 [2024-07-25 07:27:55.998194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.678 [2024-07-25 07:27:55.998204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.678 [2024-07-25 07:27:55.998210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5bcc0) on tqpair=0xbfb540 00:22:23.678 [2024-07-25 07:27:55.998248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.678 [2024-07-25 07:27:55.998259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.678 [2024-07-25 07:27:55.998265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.678 [2024-07-25 07:27:55.998271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5be40) on tqpair=0xbfb540 00:22:23.678 ===================================================== 00:22:23.678 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.678 ===================================================== 00:22:23.678 Controller Capabilities/Features 00:22:23.678 ================================ 00:22:23.678 Vendor ID: 8086 00:22:23.678 Subsystem Vendor ID: 8086 00:22:23.678 Serial Number: SPDK00000000000001 00:22:23.678 Model Number: SPDK bdev Controller 00:22:23.678 Firmware Version: 24.09 00:22:23.678 Recommended Arb Burst: 6 00:22:23.678 IEEE OUI Identifier: e4 d2 5c 00:22:23.678 Multi-path I/O 00:22:23.678 May have multiple subsystem ports: Yes 00:22:23.678 May have multiple controllers: Yes 00:22:23.678 Associated with SR-IOV VF: No 00:22:23.678 Max Data Transfer Size: 131072 00:22:23.678 Max Number of Namespaces: 32 00:22:23.678 Max Number of I/O 
Queues: 127 00:22:23.678 NVMe Specification Version (VS): 1.3 00:22:23.678 NVMe Specification Version (Identify): 1.3 00:22:23.678 Maximum Queue Entries: 128 00:22:23.678 Contiguous Queues Required: Yes 00:22:23.678 Arbitration Mechanisms Supported 00:22:23.678 Weighted Round Robin: Not Supported 00:22:23.678 Vendor Specific: Not Supported 00:22:23.678 Reset Timeout: 15000 ms 00:22:23.678 Doorbell Stride: 4 bytes 00:22:23.678 NVM Subsystem Reset: Not Supported 00:22:23.678 Command Sets Supported 00:22:23.678 NVM Command Set: Supported 00:22:23.678 Boot Partition: Not Supported 00:22:23.678 Memory Page Size Minimum: 4096 bytes 00:22:23.678 Memory Page Size Maximum: 4096 bytes 00:22:23.678 Persistent Memory Region: Not Supported 00:22:23.678 Optional Asynchronous Events Supported 00:22:23.678 Namespace Attribute Notices: Supported 00:22:23.678 Firmware Activation Notices: Not Supported 00:22:23.678 ANA Change Notices: Not Supported 00:22:23.678 PLE Aggregate Log Change Notices: Not Supported 00:22:23.678 LBA Status Info Alert Notices: Not Supported 00:22:23.678 EGE Aggregate Log Change Notices: Not Supported 00:22:23.678 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.678 Zone Descriptor Change Notices: Not Supported 00:22:23.678 Discovery Log Change Notices: Not Supported 00:22:23.678 Controller Attributes 00:22:23.678 128-bit Host Identifier: Supported 00:22:23.678 Non-Operational Permissive Mode: Not Supported 00:22:23.678 NVM Sets: Not Supported 00:22:23.678 Read Recovery Levels: Not Supported 00:22:23.678 Endurance Groups: Not Supported 00:22:23.678 Predictable Latency Mode: Not Supported 00:22:23.678 Traffic Based Keep ALive: Not Supported 00:22:23.678 Namespace Granularity: Not Supported 00:22:23.678 SQ Associations: Not Supported 00:22:23.678 UUID List: Not Supported 00:22:23.678 Multi-Domain Subsystem: Not Supported 00:22:23.678 Fixed Capacity Management: Not Supported 00:22:23.678 Variable Capacity Management: Not Supported 00:22:23.678 Delete 
Endurance Group: Not Supported 00:22:23.678 Delete NVM Set: Not Supported 00:22:23.678 Extended LBA Formats Supported: Not Supported 00:22:23.678 Flexible Data Placement Supported: Not Supported 00:22:23.678 00:22:23.678 Controller Memory Buffer Support 00:22:23.678 ================================ 00:22:23.678 Supported: No 00:22:23.678 00:22:23.678 Persistent Memory Region Support 00:22:23.678 ================================ 00:22:23.678 Supported: No 00:22:23.678 00:22:23.678 Admin Command Set Attributes 00:22:23.678 ============================ 00:22:23.678 Security Send/Receive: Not Supported 00:22:23.678 Format NVM: Not Supported 00:22:23.678 Firmware Activate/Download: Not Supported 00:22:23.678 Namespace Management: Not Supported 00:22:23.678 Device Self-Test: Not Supported 00:22:23.678 Directives: Not Supported 00:22:23.678 NVMe-MI: Not Supported 00:22:23.678 Virtualization Management: Not Supported 00:22:23.678 Doorbell Buffer Config: Not Supported 00:22:23.678 Get LBA Status Capability: Not Supported 00:22:23.678 Command & Feature Lockdown Capability: Not Supported 00:22:23.678 Abort Command Limit: 4 00:22:23.678 Async Event Request Limit: 4 00:22:23.678 Number of Firmware Slots: N/A 00:22:23.679 Firmware Slot 1 Read-Only: N/A 00:22:23.679 Firmware Activation Without Reset: N/A 00:22:23.679 Multiple Update Detection Support: N/A 00:22:23.679 Firmware Update Granularity: No Information Provided 00:22:23.679 Per-Namespace SMART Log: No 00:22:23.679 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.679 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:23.679 Command Effects Log Page: Supported 00:22:23.679 Get Log Page Extended Data: Supported 00:22:23.679 Telemetry Log Pages: Not Supported 00:22:23.679 Persistent Event Log Pages: Not Supported 00:22:23.679 Supported Log Pages Log Page: May Support 00:22:23.679 Commands Supported & Effects Log Page: Not Supported 00:22:23.679 Feature Identifiers & Effects Log Page:May Support 00:22:23.679 NVMe-MI 
Commands & Effects Log Page: May Support 00:22:23.679 Data Area 4 for Telemetry Log: Not Supported 00:22:23.679 Error Log Page Entries Supported: 128 00:22:23.679 Keep Alive: Supported 00:22:23.679 Keep Alive Granularity: 10000 ms 00:22:23.679 00:22:23.679 NVM Command Set Attributes 00:22:23.679 ========================== 00:22:23.679 Submission Queue Entry Size 00:22:23.679 Max: 64 00:22:23.679 Min: 64 00:22:23.679 Completion Queue Entry Size 00:22:23.679 Max: 16 00:22:23.679 Min: 16 00:22:23.679 Number of Namespaces: 32 00:22:23.679 Compare Command: Supported 00:22:23.679 Write Uncorrectable Command: Not Supported 00:22:23.679 Dataset Management Command: Supported 00:22:23.679 Write Zeroes Command: Supported 00:22:23.679 Set Features Save Field: Not Supported 00:22:23.679 Reservations: Supported 00:22:23.679 Timestamp: Not Supported 00:22:23.679 Copy: Supported 00:22:23.679 Volatile Write Cache: Present 00:22:23.679 Atomic Write Unit (Normal): 1 00:22:23.679 Atomic Write Unit (PFail): 1 00:22:23.679 Atomic Compare & Write Unit: 1 00:22:23.679 Fused Compare & Write: Supported 00:22:23.679 Scatter-Gather List 00:22:23.679 SGL Command Set: Supported 00:22:23.679 SGL Keyed: Supported 00:22:23.679 SGL Bit Bucket Descriptor: Not Supported 00:22:23.679 SGL Metadata Pointer: Not Supported 00:22:23.679 Oversized SGL: Not Supported 00:22:23.679 SGL Metadata Address: Not Supported 00:22:23.679 SGL Offset: Supported 00:22:23.679 Transport SGL Data Block: Not Supported 00:22:23.679 Replay Protected Memory Block: Not Supported 00:22:23.679 00:22:23.679 Firmware Slot Information 00:22:23.679 ========================= 00:22:23.679 Active slot: 1 00:22:23.679 Slot 1 Firmware Revision: 24.09 00:22:23.679 00:22:23.679 00:22:23.679 Commands Supported and Effects 00:22:23.679 ============================== 00:22:23.679 Admin Commands 00:22:23.679 -------------- 00:22:23.679 Get Log Page (02h): Supported 00:22:23.679 Identify (06h): Supported 00:22:23.679 Abort (08h): Supported 
00:22:23.679 Set Features (09h): Supported 00:22:23.679 Get Features (0Ah): Supported 00:22:23.679 Asynchronous Event Request (0Ch): Supported 00:22:23.679 Keep Alive (18h): Supported 00:22:23.679 I/O Commands 00:22:23.679 ------------ 00:22:23.679 Flush (00h): Supported LBA-Change 00:22:23.679 Write (01h): Supported LBA-Change 00:22:23.679 Read (02h): Supported 00:22:23.679 Compare (05h): Supported 00:22:23.679 Write Zeroes (08h): Supported LBA-Change 00:22:23.679 Dataset Management (09h): Supported LBA-Change 00:22:23.679 Copy (19h): Supported LBA-Change 00:22:23.679 00:22:23.679 Error Log 00:22:23.679 ========= 00:22:23.679 00:22:23.679 Arbitration 00:22:23.679 =========== 00:22:23.679 Arbitration Burst: 1 00:22:23.679 00:22:23.679 Power Management 00:22:23.679 ================ 00:22:23.679 Number of Power States: 1 00:22:23.679 Current Power State: Power State #0 00:22:23.679 Power State #0: 00:22:23.679 Max Power: 0.00 W 00:22:23.679 Non-Operational State: Operational 00:22:23.679 Entry Latency: Not Reported 00:22:23.679 Exit Latency: Not Reported 00:22:23.679 Relative Read Throughput: 0 00:22:23.679 Relative Read Latency: 0 00:22:23.679 Relative Write Throughput: 0 00:22:23.679 Relative Write Latency: 0 00:22:23.679 Idle Power: Not Reported 00:22:23.679 Active Power: Not Reported 00:22:23.679 Non-Operational Permissive Mode: Not Supported 00:22:23.679 00:22:23.679 Health Information 00:22:23.679 ================== 00:22:23.679 Critical Warnings: 00:22:23.679 Available Spare Space: OK 00:22:23.679 Temperature: OK 00:22:23.679 Device Reliability: OK 00:22:23.679 Read Only: No 00:22:23.679 Volatile Memory Backup: OK 00:22:23.679 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:23.679 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:23.679 Available Spare: 0% 00:22:23.679 Available Spare Threshold: 0% 00:22:23.679 Life Percentage Used:[2024-07-25 07:27:55.998402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.679 [2024-07-25 
07:27:55.998414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbfb540) 00:22:23.679 [2024-07-25 07:27:55.998425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-07-25 07:27:55.998447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5be40, cid 7, qid 0 00:22:23.679 [2024-07-25 07:27:55.998634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.679 [2024-07-25 07:27:55.998650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.679 [2024-07-25 07:27:55.998657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.679 [2024-07-25 07:27:55.998664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5be40) on tqpair=0xbfb540 00:22:23.679 [2024-07-25 07:27:55.998712] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:23.679 [2024-07-25 07:27:55.998732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b3c0) on tqpair=0xbfb540 00:22:23.679 [2024-07-25 07:27:55.998743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-07-25 07:27:55.998755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b540) on tqpair=0xbfb540 00:22:23.679 [2024-07-25 07:27:55.998764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-07-25 07:27:55.998772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b6c0) on tqpair=0xbfb540 00:22:23.679 [2024-07-25 07:27:55.998779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-07-25 
07:27:55.998787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.679 [2024-07-25 07:27:55.998794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-07-25 07:27:55.998807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.679 [2024-07-25 07:27:55.998816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.679 [2024-07-25 07:27:55.998822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.679 [2024-07-25 07:27:55.998833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-07-25 07:27:55.998855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.679 [2024-07-25 07:27:55.999024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.679 [2024-07-25 07:27:55.999039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.679 [2024-07-25 07:27:55.999045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:55.999063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:55.999087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:55.999114] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:55.999265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:55.999281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:55.999287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:55.999302] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:23.680 [2024-07-25 07:27:55.999310] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:23.680 [2024-07-25 07:27:55.999326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:55.999351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:55.999373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:55.999556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:55.999568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:55.999575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:55.999601] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:55.999628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:55.999649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:55.999791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:55.999806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:55.999813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:55.999836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:55.999851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:55.999862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:55.999882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:56.000065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.000077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.000084] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.000106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:56.000131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:56.000152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:56.000404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.000420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.000427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.000451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:56.000477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:56.000498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 
07:27:56.000621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.000636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.000642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.000665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:56.000695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:56.000716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:56.000835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.000850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.000857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.000880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.000896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:56.000906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:56.000926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:56.001096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.001111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.001118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.001124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.001140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.001150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.001156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:56.001166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:56.001187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:56.005254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.005270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.005278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.005284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.005302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.005311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.680 
[2024-07-25 07:27:56.005317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfb540) 00:22:23.680 [2024-07-25 07:27:56.005328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.680 [2024-07-25 07:27:56.005349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc5b840, cid 3, qid 0 00:22:23.680 [2024-07-25 07:27:56.005517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.680 [2024-07-25 07:27:56.005532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.680 [2024-07-25 07:27:56.005539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.680 [2024-07-25 07:27:56.005546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc5b840) on tqpair=0xbfb540 00:22:23.680 [2024-07-25 07:27:56.005559] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:23.680 0% 00:22:23.680 Data Units Read: 0 00:22:23.680 Data Units Written: 0 00:22:23.680 Host Read Commands: 0 00:22:23.680 Host Write Commands: 0 00:22:23.680 Controller Busy Time: 0 minutes 00:22:23.680 Power Cycles: 0 00:22:23.680 Power On Hours: 0 hours 00:22:23.680 Unsafe Shutdowns: 0 00:22:23.680 Unrecoverable Media Errors: 0 00:22:23.680 Lifetime Error Log Entries: 0 00:22:23.680 Warning Temperature Time: 0 minutes 00:22:23.680 Critical Temperature Time: 0 minutes 00:22:23.680 00:22:23.680 Number of Queues 00:22:23.681 ================ 00:22:23.681 Number of I/O Submission Queues: 127 00:22:23.681 Number of I/O Completion Queues: 127 00:22:23.681 00:22:23.681 Active Namespaces 00:22:23.681 ================= 00:22:23.681 Namespace ID:1 00:22:23.681 Error Recovery Timeout: Unlimited 00:22:23.681 Command Set Identifier: NVM (00h) 00:22:23.681 Deallocate: Supported 00:22:23.681 Deallocated/Unwritten Error: Not Supported 
00:22:23.681 Deallocated Read Value: Unknown 00:22:23.681 Deallocate in Write Zeroes: Not Supported 00:22:23.681 Deallocated Guard Field: 0xFFFF 00:22:23.681 Flush: Supported 00:22:23.681 Reservation: Supported 00:22:23.681 Namespace Sharing Capabilities: Multiple Controllers 00:22:23.681 Size (in LBAs): 131072 (0GiB) 00:22:23.681 Capacity (in LBAs): 131072 (0GiB) 00:22:23.681 Utilization (in LBAs): 131072 (0GiB) 00:22:23.681 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:23.681 EUI64: ABCDEF0123456789 00:22:23.681 UUID: e51dfe5c-42d2-4df3-9a42-94a617342b56 00:22:23.681 Thin Provisioning: Not Supported 00:22:23.681 Per-NS Atomic Units: Yes 00:22:23.681 Atomic Boundary Size (Normal): 0 00:22:23.681 Atomic Boundary Size (PFail): 0 00:22:23.681 Atomic Boundary Offset: 0 00:22:23.681 Maximum Single Source Range Length: 65535 00:22:23.681 Maximum Copy Length: 65535 00:22:23.681 Maximum Source Range Count: 1 00:22:23.681 NGUID/EUI64 Never Reused: No 00:22:23.681 Namespace Write Protected: No 00:22:23.681 Number of LBA Formats: 1 00:22:23.681 Current LBA Format: LBA Format #00 00:22:23.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:23.681 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.681 rmmod nvme_tcp 00:22:23.681 rmmod nvme_fabrics 00:22:23.681 rmmod nvme_keyring 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2525463 ']' 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2525463 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2525463 ']' 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2525463 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2525463 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2525463' 00:22:23.681 killing process with pid 2525463 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2525463 00:22:23.681 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2525463 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.940 07:27:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.473 00:22:26.473 real 0m5.340s 00:22:26.473 user 0m4.424s 00:22:26.473 sys 0m1.823s 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.473 ************************************ 00:22:26.473 END TEST nvmf_identify 00:22:26.473 ************************************ 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:26.473 07:27:58 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.473 ************************************ 00:22:26.473 START TEST nvmf_perf 00:22:26.473 ************************************ 00:22:26.473 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.473 * Looking for test storage... 00:22:26.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.474 07:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 
-- # local -ga net_devs 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:28.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:28.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.373 07:28:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:28.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:28.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:28.373 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr 
flush cvl_0_0 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:22:28.374 00:22:28.374 --- 10.0.0.2 ping statistics --- 00:22:28.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.374 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:28.374 00:22:28.374 --- 10.0.0.1 ping statistics --- 00:22:28.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.374 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2527539 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2527539 00:22:28.374 
07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2527539 ']' 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.374 07:28:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:28.374 [2024-07-25 07:28:00.774198] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:22:28.374 [2024-07-25 07:28:00.774294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.374 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.374 [2024-07-25 07:28:00.837425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.632 [2024-07-25 07:28:00.945653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.632 [2024-07-25 07:28:00.945702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.632 [2024-07-25 07:28:00.945725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.632 [2024-07-25 07:28:00.945736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:28.632 [2024-07-25 07:28:00.945745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.632 [2024-07-25 07:28:00.945823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.632 [2024-07-25 07:28:00.945888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.632 [2024-07-25 07:28:00.945953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.632 [2024-07-25 07:28:00.945955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:28.632 07:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:31.905 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:31.905 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:32.162 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:32.162 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:32.419 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:32.419 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:32.419 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:32.419 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:32.419 07:28:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.676 [2024-07-25 07:28:04.979323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.676 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.933 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:32.933 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.190 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:33.190 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:33.447 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.447 [2024-07-25 07:28:05.963024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.704 07:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:33.704 07:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:33.704 07:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:33.704 07:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:33.704 07:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:35.075 Initializing NVMe Controllers 00:22:35.075 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:35.075 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:35.075 Initialization complete. Launching workers. 00:22:35.075 ======================================================== 00:22:35.075 Latency(us) 00:22:35.075 Device Information : IOPS MiB/s Average min max 00:22:35.075 PCIE (0000:88:00.0) NSID 1 from core 0: 83140.23 324.77 384.42 44.75 5249.84 00:22:35.075 ======================================================== 00:22:35.075 Total : 83140.23 324.77 384.42 44.75 5249.84 00:22:35.075 00:22:35.075 07:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:35.075 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.447 Initializing NVMe Controllers 00:22:36.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:36.447 
Initialization complete. Launching workers. 00:22:36.447 ======================================================== 00:22:36.447 Latency(us) 00:22:36.447 Device Information : IOPS MiB/s Average min max 00:22:36.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 123.00 0.48 8273.43 217.59 45150.01 00:22:36.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15264.48 4996.74 47902.17 00:22:36.447 ======================================================== 00:22:36.447 Total : 189.00 0.74 10714.75 217.59 47902.17 00:22:36.447 00:22:36.447 07:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:36.447 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.344 Initializing NVMe Controllers 00:22:38.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:38.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:38.344 Initialization complete. Launching workers. 
00:22:38.344 ======================================================== 00:22:38.344 Latency(us) 00:22:38.344 Device Information : IOPS MiB/s Average min max 00:22:38.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8328.19 32.53 3842.76 626.63 7687.74 00:22:38.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3921.79 15.32 8201.80 5047.56 15978.57 00:22:38.344 ======================================================== 00:22:38.344 Total : 12249.98 47.85 5238.30 626.63 15978.57 00:22:38.344 00:22:38.344 07:28:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:38.344 07:28:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:38.344 07:28:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:38.344 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.870 Initializing NVMe Controllers 00:22:40.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.870 Controller IO queue size 128, less than required. 00:22:40.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.870 Controller IO queue size 128, less than required. 00:22:40.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:40.870 Initialization complete. Launching workers. 
00:22:40.870 ======================================================== 00:22:40.870 Latency(us) 00:22:40.870 Device Information : IOPS MiB/s Average min max 00:22:40.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1211.32 302.83 108457.41 72324.55 166304.60 00:22:40.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.91 150.23 222927.33 70741.08 349924.76 00:22:40.870 ======================================================== 00:22:40.870 Total : 1812.23 453.06 146414.06 70741.08 349924.76 00:22:40.870 00:22:40.870 07:28:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:40.870 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.870 No valid NVMe controllers or AIO or URING devices found 00:22:40.870 Initializing NVMe Controllers 00:22:40.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.870 Controller IO queue size 128, less than required. 00:22:40.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.870 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:40.870 Controller IO queue size 128, less than required. 00:22:40.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.870 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:40.870 WARNING: Some requested NVMe devices were skipped 00:22:40.870 07:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:40.870 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.416 Initializing NVMe Controllers 00:22:43.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.416 Controller IO queue size 128, less than required. 00:22:43.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.416 Controller IO queue size 128, less than required. 00:22:43.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.416 Initialization complete. Launching workers. 
00:22:43.416 00:22:43.416 ==================== 00:22:43.416 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:43.416 TCP transport: 00:22:43.416 polls: 17234 00:22:43.416 idle_polls: 6277 00:22:43.416 sock_completions: 10957 00:22:43.416 nvme_completions: 4735 00:22:43.416 submitted_requests: 7106 00:22:43.416 queued_requests: 1 00:22:43.416 00:22:43.416 ==================== 00:22:43.416 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:43.416 TCP transport: 00:22:43.416 polls: 20334 00:22:43.416 idle_polls: 9891 00:22:43.416 sock_completions: 10443 00:22:43.416 nvme_completions: 4799 00:22:43.416 submitted_requests: 7324 00:22:43.416 queued_requests: 1 00:22:43.416 ======================================================== 00:22:43.416 Latency(us) 00:22:43.416 Device Information : IOPS MiB/s Average min max 00:22:43.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1183.48 295.87 111338.38 66671.51 156479.67 00:22:43.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1199.48 299.87 108289.42 50752.97 160265.21 00:22:43.416 ======================================================== 00:22:43.416 Total : 2382.97 595.74 109803.66 50752.97 160265.21 00:22:43.416 00:22:43.416 07:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:43.416 07:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.980 07:28:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.980 rmmod nvme_tcp 00:22:43.980 rmmod nvme_fabrics 00:22:43.980 rmmod nvme_keyring 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2527539 ']' 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2527539 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2527539 ']' 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2527539 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2527539 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2527539' 00:22:43.980 killing process with pid 2527539 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 2527539 00:22:43.980 07:28:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2527539 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.876 07:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.774 07:28:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.774 00:22:47.774 real 0m21.475s 00:22:47.774 user 1m5.642s 00:22:47.774 sys 0m5.192s 00:22:47.774 07:28:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.774 07:28:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:47.774 ************************************ 00:22:47.774 END TEST nvmf_perf 00:22:47.774 ************************************ 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:22:47.774 ************************************ 00:22:47.774 START TEST nvmf_fio_host 00:22:47.774 ************************************ 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:47.774 * Looking for test storage... 00:22:47.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.774 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.775 07:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.673 07:28:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:49.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:49.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:49.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:49.673 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:49.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:49.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:22:49.674 00:22:49.674 --- 10.0.0.2 ping statistics --- 00:22:49.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.674 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:22:49.674 00:22:49.674 --- 10.0.0.1 ping statistics --- 00:22:49.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.674 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:49.674 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.674 
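The `nvmf_tcp_init` trace above follows a fixed pattern: one port of the NIC (`cvl_0_0`) is moved into a private network namespace to act as the target at 10.0.0.2, while the other port (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-tested. A condensed sketch of those commands (device names and addresses as they appear in the log; requires root and the `cvl_0_*` devices):

```shell
# Condensed from the nvmf_tcp_init trace above; not standalone-runnable
# without the hardware, shown only to make the topology explicit.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                          # private namespace for the target
ip link set cvl_0_0 netns "$NS"             # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP

ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```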
07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2531502 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2531502 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2531502 ']' 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.931 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.931 [2024-07-25 07:28:22.247515] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:22:49.931 [2024-07-25 07:28:22.247612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.931 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.932 [2024-07-25 07:28:22.310973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.932 [2024-07-25 07:28:22.422184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.932 [2024-07-25 07:28:22.422261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.932 [2024-07-25 07:28:22.422277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.932 [2024-07-25 07:28:22.422288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.932 [2024-07-25 07:28:22.422298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.932 [2024-07-25 07:28:22.422381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.932 [2024-07-25 07:28:22.422444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.932 [2024-07-25 07:28:22.422511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.932 [2024-07-25 07:28:22.422513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.189 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.189 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:50.189 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:50.447 [2024-07-25 07:28:22.821577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.447 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:50.447 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.447 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.447 07:28:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:50.704 Malloc1 00:22:50.704 07:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.962 07:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:51.218 07:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.475 [2024-07-25 07:28:23.913312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.475 07:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
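The RPC calls traced above assemble the target in the usual order: transport, backing bdev, subsystem, namespace, listeners. Condensed for readability (all arguments as in the log; `rpc.py` stands in for the full workspace path used in the trace, and a running `nvmf_tgt` is assumed):

```shell
# RPC sequence from the fio.sh trace above, with the script path shortened.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB malloc bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```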
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:51.732 07:28:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:51.989 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:51.989 fio-3.35 
00:22:51.989 Starting 1 thread 00:22:51.989 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.513 00:22:54.513 test: (groupid=0, jobs=1): err= 0: pid=2531865: Thu Jul 25 07:28:26 2024 00:22:54.513 read: IOPS=9135, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec) 00:22:54.513 slat (nsec): min=1914, max=163291, avg=2524.13, stdev=1954.89 00:22:54.513 clat (usec): min=2404, max=13909, avg=7745.02, stdev=591.35 00:22:54.513 lat (usec): min=2432, max=13911, avg=7747.54, stdev=591.23 00:22:54.513 clat percentiles (usec): 00:22:54.513 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:22:54.513 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:22:54.513 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:22:54.513 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11994], 99.95th=[12780], 00:22:54.513 | 99.99th=[13829] 00:22:54.513 bw ( KiB/s): min=35728, max=36912, per=99.89%, avg=36502.00, stdev=545.46, samples=4 00:22:54.513 iops : min= 8932, max= 9228, avg=9125.50, stdev=136.37, samples=4 00:22:54.513 write: IOPS=9147, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2006msec); 0 zone resets 00:22:54.513 slat (usec): min=2, max=133, avg= 2.64, stdev= 1.48 00:22:54.513 clat (usec): min=1414, max=10982, avg=6226.32, stdev=500.78 00:22:54.513 lat (usec): min=1423, max=10984, avg=6228.96, stdev=500.74 00:22:54.513 clat percentiles (usec): 00:22:54.513 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:22:54.513 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:22:54.513 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:22:54.513 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 9765], 99.95th=[10421], 00:22:54.513 | 99.99th=[10945] 00:22:54.513 bw ( KiB/s): min=36352, max=36800, per=100.00%, avg=36592.00, stdev=189.43, samples=4 00:22:54.513 iops : min= 9088, max= 9200, avg=9148.00, stdev=47.36, samples=4 00:22:54.513 lat (msec) : 2=0.02%, 4=0.11%, 10=99.71%, 20=0.16% 
00:22:54.513 cpu : usr=56.51%, sys=37.96%, ctx=71, majf=0, minf=38 00:22:54.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:54.513 issued rwts: total=18325,18349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:54.513 00:22:54.513 Run status group 0 (all jobs): 00:22:54.513 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:22:54.513 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2006-2006msec 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:54.514 07:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:22:54.514 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:54.514 fio-3.35 00:22:54.514 Starting 1 thread 00:22:54.771 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.297 00:22:57.297 test: (groupid=0, jobs=1): err= 0: pid=2532312: Thu Jul 25 07:28:29 2024 00:22:57.297 read: IOPS=8138, BW=127MiB/s (133MB/s)(255MiB/2009msec) 00:22:57.297 slat (nsec): min=2741, max=93662, avg=3754.09, stdev=1618.10 00:22:57.297 clat (usec): min=2241, max=52238, avg=9456.78, stdev=4042.22 00:22:57.297 lat (usec): min=2245, max=52241, avg=9460.53, stdev=4042.20 00:22:57.297 clat percentiles (usec): 00:22:57.297 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 7308], 00:22:57.297 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:22:57.297 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12125], 95.00th=[13435], 00:22:57.298 | 99.00th=[16319], 99.50th=[45876], 99.90th=[51119], 99.95th=[51643], 00:22:57.298 | 99.99th=[52167] 00:22:57.298 bw ( KiB/s): min=57696, max=77024, per=51.46%, avg=67016.00, stdev=8457.34, samples=4 00:22:57.298 iops : min= 3606, max= 4814, avg=4188.50, stdev=528.58, samples=4 00:22:57.298 write: IOPS=4848, BW=75.8MiB/s (79.4MB/s)(137MiB/1804msec); 0 zone resets 00:22:57.298 slat (usec): min=30, max=193, avg=34.09, stdev= 5.84 00:22:57.298 clat (usec): min=5741, max=18307, avg=10990.51, stdev=1956.99 00:22:57.298 lat (usec): min=5773, max=18339, avg=11024.59, stdev=1957.49 00:22:57.298 clat percentiles (usec): 00:22:57.298 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9372], 00:22:57.298 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:22:57.298 | 70.00th=[11731], 80.00th=[12518], 90.00th=[13829], 95.00th=[14746], 00:22:57.298 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:22:57.298 | 99.99th=[18220] 00:22:57.298 bw ( KiB/s): min=60096, max=79648, per=89.82%, 
avg=69680.00, stdev=8632.19, samples=4 00:22:57.298 iops : min= 3756, max= 4978, avg=4355.00, stdev=539.51, samples=4 00:22:57.298 lat (msec) : 4=0.23%, 10=54.87%, 20=44.40%, 50=0.37%, 100=0.13% 00:22:57.298 cpu : usr=75.26%, sys=21.75%, ctx=30, majf=0, minf=54 00:22:57.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:57.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:57.298 issued rwts: total=16351,8747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:57.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:57.298 00:22:57.298 Run status group 0 (all jobs): 00:22:57.298 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2009-2009msec 00:22:57.298 WRITE: bw=75.8MiB/s (79.4MB/s), 75.8MiB/s-75.8MiB/s (79.4MB/s-79.4MB/s), io=137MiB (143MB), run=1804-1804msec 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.298 rmmod nvme_tcp 00:22:57.298 rmmod nvme_fabrics 00:22:57.298 rmmod nvme_keyring 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2531502 ']' 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2531502 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2531502 ']' 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2531502 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2531502 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2531502' 00:22:57.298 killing process with pid 2531502 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2531502 00:22:57.298 07:28:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2531502 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.555 07:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.087 00:23:00.087 real 0m12.021s 00:23:00.087 user 0m35.500s 00:23:00.087 sys 0m4.051s 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.087 ************************************ 00:23:00.087 END TEST nvmf_fio_host 00:23:00.087 ************************************ 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.087 ************************************ 00:23:00.087 START TEST nvmf_failover 00:23:00.087 ************************************ 00:23:00.087 07:28:32 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:00.087 * Looking for test storage... 00:23:00.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.087 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.088 07:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.464 
07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:01.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:01.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:01.464 07:28:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:01.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:01.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.464 07:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.722 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.722 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.722 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.722 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:01.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:23:01.723 00:23:01.723 --- 10.0.0.2 ping statistics --- 00:23:01.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.723 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:23:01.723 00:23:01.723 --- 10.0.0.1 ping statistics --- 00:23:01.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.723 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2534504 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2534504 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2534504 ']' 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.723 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.723 [2024-07-25 07:28:34.180911] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:23:01.723 [2024-07-25 07:28:34.180984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.723 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.723 [2024-07-25 07:28:34.245219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.981 [2024-07-25 07:28:34.357052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.981 [2024-07-25 07:28:34.357122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.981 [2024-07-25 07:28:34.357151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.981 [2024-07-25 07:28:34.357162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.981 [2024-07-25 07:28:34.357172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.981 [2024-07-25 07:28:34.360279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.981 [2024-07-25 07:28:34.360365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.981 [2024-07-25 07:28:34.360368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.981 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:02.238 [2024-07-25 07:28:34.760897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.496 07:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:02.754 Malloc0 00:23:02.754 07:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.057 07:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.314 07:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.572 [2024-07-25 07:28:35.880407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.572 07:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:03.829 [2024-07-25 07:28:36.125071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:03.829 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:04.087 [2024-07-25 07:28:36.381850] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2534793 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2534793 /var/tmp/bdevperf.sock 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2534793 ']' 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.087 07:28:36 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.087 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:04.345 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.345 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:04.345 07:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.910 NVMe0n1 00:23:04.910 07:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:05.167 00:23:05.167 07:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2534930 00:23:05.167 07:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:05.167 07:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:06.098 07:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:23:06.355 07:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:09.637 07:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:09.894 00:23:09.894 07:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:10.151 [2024-07-25 07:28:42.570234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1d60 is same with the state(5) to be set 00:23:10.151 07:28:42
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:13.428 07:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.428 [2024-07-25 07:28:45.870043] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.428 07:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:14.800 07:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:14.800 [2024-07-25 07:28:47.173614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2ad0 is same with the state(5) to be set 00:23:14.800 07:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2534930 00:23:21.391 0 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2534793 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2534793 ']' 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2534793 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2534793 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2534793' killing process with pid 2534793 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2534793 00:23:21.391 07:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2534793 00:23:21.391 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- #
cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.391 [2024-07-25 07:28:36.447413] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:23:21.391 [2024-07-25 07:28:36.447493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534793 ] 00:23:21.391 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.391 [2024-07-25 07:28:36.505903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.391 [2024-07-25 07:28:36.613409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.391 Running I/O for 15 seconds... 00:23:21.391 [2024-07-25 07:28:38.862427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.391 [2024-07-25 07:28:38.862494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77248 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 
07:28:38.862763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.862978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.862992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.391 [2024-07-25 07:28:38.863005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.863020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.391 [2024-07-25 07:28:38.863033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.863047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.391 [2024-07-25 07:28:38.863060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.863075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.391 [2024-07-25 07:28:38.863088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.391 [2024-07-25 07:28:38.863102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:23:21.391 [2024-07-25 07:28:38.863115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.391 [2024-07-25 07:28:38.863130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.391 [2024-07-25 07:28:38.863143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~200 further alternating READ/WRITE command prints (READ lba:77368-78152, WRITE lba:78200-78248, all sqid:1 len:8) and matching "ABORTED - SQ DELETION (00/08)" completion notices elided; every pair follows the same pattern as above ...]
00:23:21.394 [2024-07-25 07:28:38.866281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbe9e0 is same with the state(5) to be set
00:23:21.394 [2024-07-25 07:28:38.866298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:21.394 [2024-07-25 07:28:38.866310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:21.394 [2024-07-25 07:28:38.866322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0
00:23:21.394 [2024-07-25 07:28:38.866334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.394 [2024-07-25 07:28:38.866401] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdbe9e0 was disconnected and freed. reset controller.
00:23:21.394 [2024-07-25 07:28:38.866419] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:21.394 [2024-07-25 07:28:38.866454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.394 [2024-07-25 07:28:38.866472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.394 [2024-07-25 07:28:38.866487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.394 [2024-07-25 07:28:38.866501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.394 [2024-07-25 07:28:38.866514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.394 [2024-07-25 07:28:38.866527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.394 [2024-07-25 07:28:38.866541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.394 [2024-07-25 07:28:38.866554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:21.394 [2024-07-25 07:28:38.866578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.394 [2024-07-25 07:28:38.866639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f1a0 (9): Bad file descriptor 00:23:21.394 [2024-07-25 07:28:38.869891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.394 [2024-07-25 07:28:39.037649] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:21.394 [2024-07-25 07:28:42.570577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.394 [2024-07-25 07:28:42.570618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.394 [2024-07-25 07:28:42.570647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.394 [2024-07-25 07:28:42.570673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.394 [2024-07-25 07:28:42.570691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.394 [2024-07-25 07:28:42.570705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.394 [2024-07-25 07:28:42.570720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.394 [2024-07-25 07:28:42.570734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.394 [2024-07-25 07:28:42.570750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.394 [2024-07-25 07:28:42.570764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.394 [2024-07-25 07:28:42.570778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.570980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.570995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.395 [2024-07-25 07:28:42.571278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.395 [2024-07-25 07:28:42.571720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.395 [2024-07-25 07:28:42.571732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.396 [2024-07-25 07:28:42.571774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.571983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.571997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 
nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.396 [2024-07-25 07:28:42.572282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 
[2024-07-25 07:28:42.572766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.396 [2024-07-25 07:28:42.572835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.396 [2024-07-25 07:28:42.572849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.396 [2024-07-25 07:28:42.572863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.572877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.572894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.572910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.572924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.572939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.572952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.572967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.572980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.572995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 
[2024-07-25 07:28:42.573255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 
[2024-07-25 07:28:42.573749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.397 [2024-07-25 07:28:42.573978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.397 [2024-07-25 07:28:42.573995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.398 [2024-07-25 07:28:42.574024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.398 [2024-07-25 07:28:42.574053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.398 [2024-07-25 07:28:42.574082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.398 [2024-07-25 07:28:42.574111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.398 [2024-07-25 07:28:42.574140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.398 [2024-07-25 07:28:42.574169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:42.574199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:42.574232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 
[2024-07-25 07:28:42.574254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:42.574269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:42.574298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:42.574327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:42.574355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:21.398 [2024-07-25 07:28:42.574405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:21.398 [2024-07-25 07:28:42.574418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106184 len:8 PRP1 0x0 PRP2 0x0 00:23:21.398 [2024-07-25 07:28:42.574431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 
[2024-07-25 07:28:42.574492] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdc0950 was disconnected and freed. reset controller. 00:23:21.398 [2024-07-25 07:28:42.574511] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:21.398 [2024-07-25 07:28:42.574557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.398 [2024-07-25 07:28:42.574575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.398 [2024-07-25 07:28:42.574605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.398 [2024-07-25 07:28:42.574632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.398 [2024-07-25 07:28:42.574659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:42.574672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:21.398 [2024-07-25 07:28:42.577905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.398 [2024-07-25 07:28:42.577944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f1a0 (9): Bad file descriptor 00:23:21.398 [2024-07-25 07:28:42.699425] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:21.398 [2024-07-25 07:28:47.174671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55320 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.174976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.174990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.398 [2024-07-25 07:28:47.175231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.398 [2024-07-25 07:28:47.175268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:21.399 [2024-07-25 07:28:47.175363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.399 [2024-07-25 07:28:47.175778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.175806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.175835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.399 [2024-07-25 07:28:47.175863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.175891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.175917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.175945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.175977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.175991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 
[2024-07-25 07:28:47.176378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.399 [2024-07-25 07:28:47.176436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.399 [2024-07-25 07:28:47.176451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.176974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.176987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.400 [2024-07-25 07:28:47.177192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 
07:28:47.177208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177374] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.400 [2024-07-25 07:28:47.177403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.400 [2024-07-25 07:28:47.177418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.401 [2024-07-25 07:28:47.177431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.177984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.177999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 
[2024-07-25 07:28:47.178041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178197] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.401 [2024-07-25 07:28:47.178366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:21.401 [2024-07-25 07:28:47.178415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56280 len:8 PRP1 0x0 PRP2 0x0 00:23:21.401 [2024-07-25 07:28:47.178429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:21.401 [2024-07-25 07:28:47.178459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:21.401 [2024-07-25 07:28:47.178471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56288 len:8 PRP1 0x0 PRP2 0x0 00:23:21.401 [2024-07-25 07:28:47.178484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:21.401 [2024-07-25 07:28:47.178507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:21.401 [2024-07-25 07:28:47.178518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56296 len:8 PRP1 0x0 PRP2 0x0 00:23:21.401 [2024-07-25 07:28:47.178531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:21.401 [2024-07-25 07:28:47.178554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:21.401 [2024-07-25 07:28:47.178565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:56304 len:8 PRP1 0x0 PRP2 0x0 00:23:21.401 [2024-07-25 07:28:47.178577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.401 [2024-07-25 07:28:47.178637] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdcf830 was disconnected and freed. reset controller. 00:23:21.402 [2024-07-25 07:28:47.178659] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:21.402 [2024-07-25 07:28:47.178694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.402 [2024-07-25 07:28:47.178712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.402 [2024-07-25 07:28:47.178728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.402 [2024-07-25 07:28:47.178742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.402 [2024-07-25 07:28:47.178756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.402 [2024-07-25 07:28:47.178769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.402 [2024-07-25 07:28:47.178782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.402 [2024-07-25 07:28:47.178795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.402 [2024-07-25 07:28:47.178808] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:21.402 [2024-07-25 07:28:47.178862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f1a0 (9): Bad file descriptor
00:23:21.402 [2024-07-25 07:28:47.182073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:21.402 [2024-07-25 07:28:47.219985] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:21.402
00:23:21.402 Latency(us)
00:23:21.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:21.402 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:21.402 Verification LBA range: start 0x0 length 0x4000
00:23:21.402 NVMe0n1 : 15.04 8431.20 32.93 724.57 0.00 13916.59 831.34 44661.57
00:23:21.402 ===================================================================================================================
00:23:21.402 Total : 8431.20 32.93 724.57 0.00 13916.59 831.34 44661.57
00:23:21.402 Received shutdown signal, test time was about 15.000000 seconds
00:23:21.402
00:23:21.402 Latency(us)
00:23:21.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:21.402 ===================================================================================================================
00:23:21.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2536665
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2536665 /var/tmp/bdevperf.sock
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2536665 ']'
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:21.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-25 07:28:53.617428] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-07-25 07:28:53.858040] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:21.402 07:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:21.966 NVMe0n1
00:23:21.966 07:28:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:22.223
00:23:22.223 07:28:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:22.787
00:23:22.787 07:28:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:22.787 07:28:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:23.045 07:28:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:23.302 07:28:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:26.581 07:28:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:26.581 07:28:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:26.581 07:28:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- #
run_test_pid=2537444 00:23:26.581 07:28:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.581 07:28:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2537444 00:23:27.513 0 00:23:27.514 07:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.514 [2024-07-25 07:28:53.115734] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:23:27.514 [2024-07-25 07:28:53.115824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536665 ] 00:23:27.514 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.514 [2024-07-25 07:28:53.180262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.514 [2024-07-25 07:28:53.288579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.514 [2024-07-25 07:28:55.558397] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:27.514 [2024-07-25 07:28:55.558489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.514 [2024-07-25 07:28:55.558513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.514 [2024-07-25 07:28:55.558530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.514 [2024-07-25 07:28:55.558544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.514 [2024-07-25 07:28:55.558558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.514 [2024-07-25 07:28:55.558572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.514 [2024-07-25 07:28:55.558586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.514 [2024-07-25 07:28:55.558600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.514 [2024-07-25 07:28:55.558613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:27.514 [2024-07-25 07:28:55.558661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:27.514 [2024-07-25 07:28:55.558693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5681a0 (9): Bad file descriptor 00:23:27.514 [2024-07-25 07:28:55.650419] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:27.514 Running I/O for 1 seconds... 
00:23:27.514 00:23:27.514 Latency(us) 00:23:27.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:27.514 Verification LBA range: start 0x0 length 0x4000 00:23:27.514 NVMe0n1 : 1.01 8577.88 33.51 0.00 0.00 14861.89 3252.53 15922.82 00:23:27.514 =================================================================================================================== 00:23:27.514 Total : 8577.88 33.51 0.00 0.00 14861.89 3252.53 15922.82 00:23:27.514 07:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.514 07:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:27.771 07:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:28.029 07:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.029 07:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:28.286 07:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:28.544 07:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:31.825 07:29:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.825 
07:29:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2536665 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2536665 ']' 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2536665 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2536665 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2536665' 00:23:31.825 killing process with pid 2536665 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2536665 00:23:31.825 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2536665 00:23:32.082 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:32.082 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.339 rmmod nvme_tcp 00:23:32.339 rmmod nvme_fabrics 00:23:32.339 rmmod nvme_keyring 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2534504 ']' 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2534504 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2534504 ']' 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2534504 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.339 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2534504 00:23:32.596 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:32.596 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:23:32.596 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2534504' 00:23:32.596 killing process with pid 2534504 00:23:32.596 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2534504 00:23:32.596 07:29:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2534504 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.855 07:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:34.753 00:23:34.753 real 0m35.129s 00:23:34.753 user 2m4.099s 00:23:34.753 sys 0m5.903s 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:34.753 ************************************ 00:23:34.753 END TEST nvmf_failover 00:23:34.753 ************************************ 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.753 ************************************ 00:23:34.753 START TEST nvmf_host_discovery 00:23:34.753 ************************************ 00:23:34.753 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:35.012 * Looking for test storage... 00:23:35.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.012 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.013 07:29:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.914 07:29:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:36.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.914 07:29:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:36.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.914 07:29:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:36.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:36.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.914 07:29:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:36.914 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:23:36.915 00:23:36.915 --- 10.0.0.2 ping statistics --- 00:23:36.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.915 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:23:36.915 00:23:36.915 --- 10.0.0.1 ping statistics --- 00:23:36.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.915 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2540046 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2540046 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2540046 ']' 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.915 07:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.915 [2024-07-25 07:29:09.388857] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:23:36.915 [2024-07-25 07:29:09.388941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.915 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.212 [2024-07-25 07:29:09.458759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.212 [2024-07-25 07:29:09.573258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.212 [2024-07-25 07:29:09.573324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:37.212 [2024-07-25 07:29:09.573340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.212 [2024-07-25 07:29:09.573353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.212 [2024-07-25 07:29:09.573364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.212 [2024-07-25 07:29:09.573395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 [2024-07-25 07:29:10.348942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 [2024-07-25 07:29:10.357117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 null0 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 null1 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.144 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2540203 00:23:38.145 
07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2540203 /tmp/host.sock 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2540203 ']' 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:38.145 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.145 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.145 [2024-07-25 07:29:10.429940] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:23:38.145 [2024-07-25 07:29:10.430019] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540203 ] 00:23:38.145 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.145 [2024-07-25 07:29:10.490783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.145 [2024-07-25 07:29:10.605803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.403 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.661 07:29:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 [2024-07-25 07:29:10.998894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.661 
07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.661 07:29:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.661 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:38.662 07:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:23:39.592 [2024-07-25 07:29:11.787447] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:39.592 [2024-07-25 07:29:11.787472] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:39.592 [2024-07-25 07:29:11.787497] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.592 [2024-07-25 07:29:11.873784] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:39.592 [2024-07-25 07:29:11.938629] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:39.592 [2024-07-25 07:29:11.938656] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:39.871 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:39.872 
07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.872 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.129 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.129 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.130 [2024-07-25 07:29:12.639697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.130 [2024-07-25 07:29:12.640468] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:40.130 [2024-07-25 07:29:12.640512] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.130 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.387 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.387 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:23:40.387 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.388 [2024-07-25 07:29:12.767417] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:40.388 07:29:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # 
sleep 1 00:23:40.645 [2024-07-25 07:29:13.035741] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:40.646 [2024-07-25 07:29:13.035771] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:40.646 [2024-07-25 07:29:13.035782] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.579 07:29:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.579 [2024-07-25 07:29:13.880353] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.579 [2024-07-25 07:29:13.880385] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.579 [2024-07-25 07:29:13.880705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.579 [2024-07-25 07:29:13.880739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.579 [2024-07-25 07:29:13.880758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:41.579 [2024-07-25 07:29:13.880773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.579 [2024-07-25 07:29:13.880789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.579 [2024-07-25 07:29:13.880804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.579 [2024-07-25 07:29:13.880819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.579 [2024-07-25 07:29:13.880834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.579 [2024-07-25 07:29:13.880849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:41.579 07:29:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.579 [2024-07-25 07:29:13.890702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.579 [2024-07-25 07:29:13.900748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.579 [2024-07-25 07:29:13.900999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.579 [2024-07-25 07:29:13.901030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.579 [2024-07-25 07:29:13.901046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.579 [2024-07-25 07:29:13.901070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.579 [2024-07-25 07:29:13.901090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.579 [2024-07-25 07:29:13.901105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.579 
[2024-07-25 07:29:13.901119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.579 [2024-07-25 07:29:13.901140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.579 [2024-07-25 07:29:13.910833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.579 [2024-07-25 07:29:13.911039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.579 [2024-07-25 07:29:13.911070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.579 [2024-07-25 07:29:13.911088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.579 [2024-07-25 07:29:13.911112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.579 [2024-07-25 07:29:13.911134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.579 [2024-07-25 07:29:13.911150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.579 [2024-07-25 07:29:13.911164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.579 [2024-07-25 07:29:13.911185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
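The trace above repeatedly drives a polling helper through `autotest_common.sh@914`–`@920` (`local cond`, `local max=10`, `(( max-- ))`, `eval`, `return 0`, `sleep 1`). Based only on those xtrace lines, the helper can be sketched as follows; the real implementation in SPDK's `autotest_common.sh` may differ in detail:

```shell
# Hypothetical reconstruction of waitforcondition, inferred from the
# xtrace output (autotest_common.sh@914-920). Not the verbatim source.
waitforcondition() {
    local cond=$1   # @914: the condition string passed by the caller
    local max=10    # @915: retry budget
    while ((max--)); do             # @916
        # @917: eval lets callers pass conditions containing command
        # substitutions, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]',
        # which are re-run on every polling iteration
        if eval "$cond"; then
            return 0                # @918: condition met
        fi
        sleep 1                     # @920: back off before retrying
    done
    return 1                        # condition never became true
}

waitforcondition '[[ -d / ]]' && echo "condition met"  # → condition met
```

This explains the repeated `(( max-- ))` / `eval` / `sleep 1` lines in the trace: each appearance is one polling iteration while the discovery service converges.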
00:23:41.579 [2024-07-25 07:29:13.920909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.579 [2024-07-25 07:29:13.921137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.579 [2024-07-25 07:29:13.921164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.579 [2024-07-25 07:29:13.921180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.579 [2024-07-25 07:29:13.921202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.579 [2024-07-25 07:29:13.921228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.579 [2024-07-25 07:29:13.921251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.579 [2024-07-25 07:29:13.921267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.579 [2024-07-25 07:29:13.921286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
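The conditions being polled are built from small list helpers such as `get_bdev_list` (`host/discovery.sh@55`) and `get_subsystem_paths` (`@63`), each an RPC call piped through `jq`, `sort`, and `xargs` to produce a single space-separated line. A minimal sketch of that pipeline, using canned JSON in place of real `rpc_cmd -s /tmp/host.sock bdev_get_bdevs` output (the stub name and sample data are illustrative, not from the source):

```shell
# Sketch of the get_bdev_list pipeline seen at host/discovery.sh@55:
# extract names with jq, sort them, and collapse to one line with xargs
# (xargs with no command invokes echo on its stdin tokens).
get_bdev_list_from_json() {
    jq -r '.[].name' | sort | xargs
}

# Stand-in for the JSON an actual bdev_get_bdevs RPC would return.
sample='[{"name":"nvme0n2"},{"name":"nvme0n1"}]'
echo "$sample" | get_bdev_list_from_json   # → nvme0n1 nvme0n2
```

The normalized single-line output is what makes comparisons like `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` in the trace stable regardless of the order the RPC returns bdevs in.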
00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.579 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.580 [2024-07-25 07:29:13.930989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.580 [2024-07-25 07:29:13.931189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.580 [2024-07-25 07:29:13.931220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.580 [2024-07-25 07:29:13.931238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.580 [2024-07-25 07:29:13.931273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.580 [2024-07-25 07:29:13.931322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.580 [2024-07-25 07:29:13.931340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization 
failed 00:23:41.580 [2024-07-25 07:29:13.931354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.580 [2024-07-25 07:29:13.931372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.580 [2024-07-25 07:29:13.941071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.580 [2024-07-25 07:29:13.941259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.580 [2024-07-25 07:29:13.941306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.580 [2024-07-25 07:29:13.941322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.580 [2024-07-25 07:29:13.941344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.580 [2024-07-25 07:29:13.941376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.580 [2024-07-25 07:29:13.941398] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.580 [2024-07-25 07:29:13.941412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.580 [2024-07-25 07:29:13.941431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.580 [2024-07-25 07:29:13.951149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.580 [2024-07-25 07:29:13.951360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.580 [2024-07-25 07:29:13.951387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.580 [2024-07-25 07:29:13.951403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.580 [2024-07-25 07:29:13.951425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.580 [2024-07-25 07:29:13.951468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.580 [2024-07-25 07:29:13.951487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.580 [2024-07-25 07:29:13.951500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.580 [2024-07-25 07:29:13.951519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.580 [2024-07-25 07:29:13.961250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.580 [2024-07-25 07:29:13.961441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.580 [2024-07-25 07:29:13.961469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdcc80 with addr=10.0.0.2, port=4420 00:23:41.580 [2024-07-25 07:29:13.961484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdcc80 is same with the state(5) to be set 00:23:41.580 [2024-07-25 07:29:13.961506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdcc80 (9): Bad file descriptor 00:23:41.580 [2024-07-25 07:29:13.961537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.580 [2024-07-25 07:29:13.961554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.580 [2024-07-25 07:29:13.961567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.580 [2024-07-25 07:29:13.961586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.580 [2024-07-25 07:29:13.966549] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:41.580 [2024-07-25 07:29:13.966580] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.580 07:29:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.580 07:29:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:41.580 07:29:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.580 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.581 
07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.581 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:41.839 07:29:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.839 07:29:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.771 [2024-07-25 07:29:15.237422] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:42.771 [2024-07-25 07:29:15.237446] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:42.771 [2024-07-25 07:29:15.237468] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:43.028 [2024-07-25 07:29:15.324786] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:43.286 [2024-07-25 07:29:15.635894] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:43.286 [2024-07-25 07:29:15.635934] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.286 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:23:43.286 request: 00:23:43.286 { 00:23:43.286 "name": "nvme", 00:23:43.286 "trtype": "tcp", 00:23:43.286 "traddr": "10.0.0.2", 00:23:43.286 "adrfam": "ipv4", 00:23:43.286 "trsvcid": "8009", 00:23:43.286 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:43.286 "wait_for_attach": true, 00:23:43.286 "method": "bdev_nvme_start_discovery", 00:23:43.286 "req_id": 1 00:23:43.287 } 00:23:43.287 Got JSON-RPC error response 00:23:43.287 response: 00:23:43.287 { 00:23:43.287 "code": -17, 00:23:43.287 "message": "File exists" 00:23:43.287 } 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.287 07:29:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.287 request: 00:23:43.287 { 00:23:43.287 "name": "nvme_second", 00:23:43.287 "trtype": "tcp", 00:23:43.287 "traddr": "10.0.0.2", 00:23:43.287 "adrfam": "ipv4", 00:23:43.287 "trsvcid": "8009", 00:23:43.287 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:43.287 "wait_for_attach": true, 00:23:43.287 "method": "bdev_nvme_start_discovery", 00:23:43.287 "req_id": 1 00:23:43.287 } 00:23:43.287 Got JSON-RPC error response 00:23:43.287 response: 00:23:43.287 { 00:23:43.287 "code": -17, 00:23:43.287 "message": "File exists" 00:23:43.287 } 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:43.287 
07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.287 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.544 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:43.544 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:43.545 07:29:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.545 07:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.476 [2024-07-25 07:29:16.831351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-25 07:29:16.831394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdfc10 with addr=10.0.0.2, port=8010 00:23:44.476 [2024-07-25 07:29:16.831417] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:44.476 [2024-07-25 07:29:16.831432] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:44.476 [2024-07-25 07:29:16.831444] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:45.408 [2024-07-25 07:29:17.833752] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.408 [2024-07-25 07:29:17.833785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdfc10 with addr=10.0.0.2, port=8010 00:23:45.408 [2024-07-25 07:29:17.833806] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:45.408 [2024-07-25 07:29:17.833825] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:45.408 [2024-07-25 07:29:17.833838] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:46.340 [2024-07-25 07:29:18.835995] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:46.340 request: 00:23:46.340 { 00:23:46.340 "name": "nvme_second", 00:23:46.340 "trtype": "tcp", 00:23:46.340 "traddr": "10.0.0.2", 00:23:46.340 "adrfam": "ipv4", 00:23:46.340 "trsvcid": "8010", 00:23:46.340 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:46.340 "wait_for_attach": false, 00:23:46.340 "attach_timeout_ms": 3000, 00:23:46.340 "method": "bdev_nvme_start_discovery", 00:23:46.340 "req_id": 1 00:23:46.340 } 00:23:46.340 Got JSON-RPC error response 00:23:46.340 response: 00:23:46.340 { 00:23:46.340 "code": -110, 00:23:46.340 "message": "Connection timed out" 00:23:46.340 } 00:23:46.340 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:46.340 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:46.340 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.340 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.340 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:46.341 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2540203 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.598 rmmod nvme_tcp 00:23:46.598 rmmod nvme_fabrics 00:23:46.598 rmmod nvme_keyring 00:23:46.598 07:29:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2540046 ']' 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2540046 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2540046 ']' 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2540046 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:46.598 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.599 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2540046 00:23:46.599 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:46.599 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:46.599 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2540046' 00:23:46.599 killing process with pid 2540046 00:23:46.599 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2540046 00:23:46.599 07:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2540046 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.857 07:29:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.857 07:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.387 00:23:49.387 real 0m14.031s 00:23:49.387 user 0m20.403s 00:23:49.387 sys 0m2.821s 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.387 ************************************ 00:23:49.387 END TEST nvmf_host_discovery 00:23:49.387 ************************************ 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.387 ************************************ 00:23:49.387 START TEST nvmf_host_multipath_status 00:23:49.387 ************************************ 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:49.387 * Looking for test storage... 00:23:49.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.387 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.388 07:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.388 07:29:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.762 
07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:50.762 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:50.762 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.762 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:50.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.763 07:29:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:50.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.763 07:29:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.763 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.020 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.020 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.020 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.020 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:23:51.021 00:23:51.021 --- 10.0.0.2 ping statistics --- 00:23:51.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.021 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:23:51.021 00:23:51.021 --- 10.0.0.1 ping statistics --- 00:23:51.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.021 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:51.021 07:29:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2543234 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2543234 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2543234 ']' 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.021 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.021 [2024-07-25 07:29:23.483010] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:23:51.021 [2024-07-25 07:29:23.483093] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.021 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.021 [2024-07-25 07:29:23.545740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:51.279 [2024-07-25 07:29:23.656709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.279 [2024-07-25 07:29:23.656774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.279 [2024-07-25 07:29:23.656787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.279 [2024-07-25 07:29:23.656798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.279 [2024-07-25 07:29:23.656807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.279 [2024-07-25 07:29:23.656931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.279 [2024-07-25 07:29:23.656936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2543234 00:23:51.279 07:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:51.536 [2024-07-25 07:29:24.052897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.795 07:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:52.101 Malloc0 00:23:52.101 07:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:52.359 07:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.616 07:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.616 [2024-07-25 07:29:25.142361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:52.888 [2024-07-25 07:29:25.386973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2543523 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2543523 /var/tmp/bdevperf.sock 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2543523 ']' 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.888 07:29:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.888 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:53.454 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.454 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:53.454 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:53.711 07:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:53.968 Nvme0n1 00:23:53.968 07:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:54.533 Nvme0n1 00:23:54.533 07:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:54.533 07:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:23:56.432 07:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:56.432 07:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:56.690 07:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:56.947 07:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:58.318 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.318 07:29:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.576 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.576 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.576 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.576 07:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.833 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.833 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.833 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.833 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.091 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.091 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.091 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.091 
07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.349 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.349 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.349 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.349 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.607 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.607 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:59.607 07:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:59.865 07:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.123 07:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:01.055 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:01.056 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:01.056 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.056 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.312 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.312 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:01.312 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.312 07:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.569 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.569 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.569 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.569 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.826 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.826 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.826 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.826 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.083 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.083 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.083 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.083 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:02.341 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.341 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:02.341 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.341 07:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.599 07:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.599 07:29:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:02.599 07:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.856 07:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:03.114 07:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:04.046 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:04.046 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:04.046 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.046 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.304 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.304 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:04.304 07:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.305 07:29:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.562 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.562 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.562 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.562 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.821 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.821 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.821 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.821 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.079 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.079 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:05.079 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.079 
07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:05.336 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.336 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:05.336 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.336 07:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.594 07:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.594 07:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:05.594 07:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.850 07:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:06.143 07:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:07.097 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:07.098 07:29:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:07.098 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.098 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.355 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.355 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:07.355 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.355 07:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:07.612 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:07.612 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:07.612 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.612 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.869 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.869 07:29:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.869 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.869 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.126 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.126 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.126 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.126 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.383 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.383 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:08.383 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.383 07:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:08.640 07:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.640 
07:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:08.640 07:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:08.896 07:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:09.153 07:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:10.083 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:10.083 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.083 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.083 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.340 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.340 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:10.340 07:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.340 07:29:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.597 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.597 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.597 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.597 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:10.855 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.855 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:10.855 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.855 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.111 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.111 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:11.111 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.111 
07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.367 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.367 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:11.367 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.367 07:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.624 07:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.624 07:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:11.624 07:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:11.881 07:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.138 07:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:13.070 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:13.070 07:29:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:13.070 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.070 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.327 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.327 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.327 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.327 07:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.584 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.584 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.584 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.584 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.842 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.842 07:29:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.842 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.842 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.099 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.099 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:14.099 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.099 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.357 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.357 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.357 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.357 07:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.615 07:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.615 
07:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:14.872 07:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:14.872 07:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:15.129 07:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.387 07:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:16.321 07:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:16.321 07:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.321 07:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.321 07:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.579 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.579 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.579 
07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.579 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.837 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.837 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.837 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.837 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.094 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.094 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.094 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.094 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.352 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.352 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.352 
07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.352 07:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.610 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.610 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.610 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.610 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.867 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.867 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:17.867 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.125 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.382 07:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:24:19.790 07:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:19.790 07:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:19.790 07:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.790 07:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.790 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.790 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.790 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.790 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.067 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.067 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.068 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.068 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:20.325 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.325 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.325 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.325 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.581 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.581 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.581 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.582 07:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:20.839 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.096 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:21.354 07:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:22.727 07:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:22.727 07:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.727 07:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.727 07:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.727 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.727 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.727 07:29:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.727 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.984 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.984 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.984 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.984 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.242 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.242 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.242 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.242 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.500 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.500 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.500 07:29:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.500 07:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.757 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.757 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.757 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.757 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.015 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.015 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:24.015 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.273 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:24.530 07:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:24:25.463 07:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:25.463 07:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:25.463 07:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.463 07:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.721 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.721 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.721 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.721 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.979 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.979 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.979 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.979 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:26.236 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.236 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.237 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.237 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.494 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.494 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:26.494 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.494 07:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.751 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.751 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:26.751 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.751 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2543523 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2543523 ']' 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2543523 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2543523 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2543523' 00:24:27.009 killing process with pid 2543523 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2543523 00:24:27.009 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2543523 00:24:27.269 Connection closed with partial response: 00:24:27.269 00:24:27.269 00:24:27.269 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2543523 00:24:27.269 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:24:27.269 [2024-07-25 07:29:25.445672] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:24:27.269 [2024-07-25 07:29:25.445753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543523 ] 00:24:27.269 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.269 [2024-07-25 07:29:25.504407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.269 [2024-07-25 07:29:25.618347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.269 Running I/O for 90 seconds... 00:24:27.269 [2024-07-25 07:29:41.317926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.317994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:27.269 [2024-07-25 07:29:41.318158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.318381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.318397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:27.269 
[2024-07-25 07:29:41.319291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 
07:29:41.319544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.269 [2024-07-25 07:29:41.319780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:27.269 [2024-07-25 07:29:41.319803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.319819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.319842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.319858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.319886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.319902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.319925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.319941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.319964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.319980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.320967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.320983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.321424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.321466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.321970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.321986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.322028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.322070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.322116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.270 [2024-07-25 07:29:41.322158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.322200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.322249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.322295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.322337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.322380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.270 [2024-07-25 07:29:41.322421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:27.270 [2024-07-25 07:29:41.322447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.271 [2024-07-25 07:29:41.322463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.322966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.322999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.323962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.323991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:41.324478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:41.324496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.919316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.919380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.919467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.919488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.919512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.919529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.919568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.919584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.922917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.922943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.922987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.271 [2024-07-25 07:29:56.923488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.271 [2024-07-25 07:29:56.923504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.272 [2024-07-25 07:29:56.923816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.272 [2024-07-25 07:29:56.923854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.923969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.923992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.924008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.924030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.924047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.924068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.924085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.924107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.924124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.272 [2024-07-25 07:29:56.924146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.272 [2024-07-25 07:29:56.924163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.272 Received shutdown signal, test time was about 32.385400 seconds 00:24:27.272 00:24:27.272 Latency(us) 00:24:27.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.272 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:27.272 Verification LBA range: start 0x0 length 0x4000 00:24:27.272 Nvme0n1 : 32.38 7893.42 30.83 0.00 0.00 16189.55 561.30 4026531.84 00:24:27.272 =================================================================================================================== 00:24:27.272 Total : 7893.42 30.83 0.00 0.00 16189.55 
561.30 4026531.84 00:24:27.272 07:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.530 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.530 rmmod nvme_tcp 00:24:27.788 rmmod nvme_fabrics 00:24:27.788 rmmod nvme_keyring 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2543234 ']' 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2543234 00:24:27.788 07:30:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2543234 ']' 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2543234 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2543234 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2543234' 00:24:27.788 killing process with pid 2543234 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2543234 00:24:27.788 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2543234 00:24:28.046 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.046 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.046 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.046 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.046 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.047 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.047 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.047 07:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.946 07:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.946 00:24:29.946 real 0m41.134s 00:24:29.946 user 2m4.363s 00:24:29.946 sys 0m10.388s 00:24:29.946 07:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:30.204 ************************************ 00:24:30.204 END TEST nvmf_host_multipath_status 00:24:30.204 ************************************ 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.204 ************************************ 00:24:30.204 START TEST nvmf_discovery_remove_ifc 00:24:30.204 ************************************ 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:30.204 * Looking for test storage... 
00:24:30.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.204 07:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.732 07:30:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:32.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:32.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:32.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.732 07:30:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:32.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.732 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.733 07:30:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:24:32.733 00:24:32.733 --- 10.0.0.2 ping statistics --- 00:24:32.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.733 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:32.733 00:24:32.733 --- 10.0.0.1 ping statistics --- 00:24:32.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.733 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
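The `nvmf_tcp_init` sequence traced above isolates the target NIC in a network namespace so initiator and target can talk over real hardware on one host. A condensed sketch of those steps (the `run` wrapper and `DRY_RUN` switch are editorial additions, not part of nvmf/common.sh; the real commands need root, so this defaults to printing them):

```shell
#!/usr/bin/env bash
set -euo pipefail

TGT_IF=${TGT_IF:-cvl_0_0}   # moved into the namespace, gets 10.0.0.2
INI_IF=${INI_IF:-cvl_0_1}   # stays in the root namespace, gets 10.0.0.1
NS=${NS:-${TGT_IF}_ns_spdk}
DRY_RUN=${DRY_RUN:-1}       # set DRY_RUN=0 to execute for real (requires root)

run() { if [[ $DRY_RUN == 1 ]]; then echo "$*"; else "$@"; fi; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity checks mirror the log: one ping in each direction.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, the target app is launched under `ip netns exec $NS`, which is why the log prefixes `nvmf_tgt` with `NVMF_TARGET_NS_CMD`.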
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2549838 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2549838 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2549838 ']' 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.733 07:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.733 [2024-07-25 07:30:04.855679] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:24:32.733 [2024-07-25 07:30:04.855750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.733 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.733 [2024-07-25 07:30:04.917138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.733 [2024-07-25 07:30:05.022965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.733 [2024-07-25 07:30:05.023031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.733 [2024-07-25 07:30:05.023044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.733 [2024-07-25 07:30:05.023056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.733 [2024-07-25 07:30:05.023079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:32.733 [2024-07-25 07:30:05.023109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.733 [2024-07-25 07:30:05.172345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.733 [2024-07-25 07:30:05.180573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:32.733 null0 00:24:32.733 [2024-07-25 07:30:05.212482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2549972 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2549972 /tmp/host.sock 
00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2549972 ']' 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:32.733 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.733 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.992 [2024-07-25 07:30:05.281909] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:24:32.992 [2024-07-25 07:30:05.281986] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549972 ] 00:24:32.992 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.992 [2024-07-25 07:30:05.340822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.992 [2024-07-25 07:30:05.456991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.992 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.250 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.250 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:33.250 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.250 07:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.183 [2024-07-25 07:30:06.661405] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:34.183 [2024-07-25 07:30:06.661443] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:34.183 [2024-07-25 07:30:06.661468] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:34.470 [2024-07-25 07:30:06.748725] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:34.470 [2024-07-25 07:30:06.852445] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:34.470 [2024-07-25 07:30:06.852509] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:34.470 [2024-07-25 07:30:06.852567] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:34.470 [2024-07-25 07:30:06.852592] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:34.470 [2024-07-25 07:30:06.852626] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.470 [2024-07-25 07:30:06.858522] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcd3860 was disconnected and freed. delete nvme_qpair. 
00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.470 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.732 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.732 07:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.664 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.664 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.664 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.664 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.664 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.665 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.665 07:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.665 07:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.665 07:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.665 07:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.597 07:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.967 07:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.899 07:30:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.899 07:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.831 07:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
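The repeated `get_bdev_list` / `sleep 1` cycles above are a poll-until-match loop: `wait_for_bdev` keeps querying `bdev_get_bdevs` until the bdev list equals the expected value. A generic standalone sketch of that pattern (`wait_for_output` is a hypothetical helper, not the script's actual `wait_for_bdev`):

```shell
#!/usr/bin/env bash
# Poll a command once per second until its output equals $expected,
# giving up after $timeout attempts.
wait_for_output() {
    local expected=$1 timeout=${2:-20}
    local cmd=("${@:3}") i
    for ((i = 0; i < timeout; i++)); do
        [[ "$("${cmd[@]}")" == "$expected" ]] && return 0
        sleep 1
    done
    echo "timed out waiting for '$expected'" >&2
    return 1
}
```

In the log, the polled command is effectively `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'`, and the expected value switches from `nvme0n1` to the empty string once the interface is taken down, then to `nvme1n1` after rediscovery.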
sleep 1 00:24:39.831 [2024-07-25 07:30:12.293589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:39.831 [2024-07-25 07:30:12.293678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.831 [2024-07-25 07:30:12.293703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.831 [2024-07-25 07:30:12.293724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.831 [2024-07-25 07:30:12.293739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.831 [2024-07-25 07:30:12.293754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.831 [2024-07-25 07:30:12.293769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.831 [2024-07-25 07:30:12.293785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.831 [2024-07-25 07:30:12.293799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.831 [2024-07-25 07:30:12.293815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.831 [2024-07-25 07:30:12.293829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.831 [2024-07-25 07:30:12.293844] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a3c0 is same with the state(5) to be set 00:24:39.831 [2024-07-25 07:30:12.303609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a3c0 (9): Bad file descriptor 00:24:39.831 [2024-07-25 07:30:12.313660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.765 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.023 [2024-07-25 07:30:13.342269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:41.023 [2024-07-25 07:30:13.342325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc9a3c0 with addr=10.0.0.2, port=4420 00:24:41.023 [2024-07-25 07:30:13.342359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a3c0 is same with the state(5) to be set 00:24:41.023 [2024-07-25 07:30:13.342388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a3c0 (9): Bad file descriptor 00:24:41.023 [2024-07-25 07:30:13.342788] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to 
perform failover, already in progress. 00:24:41.023 [2024-07-25 07:30:13.342832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:41.023 [2024-07-25 07:30:13.342859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:41.023 [2024-07-25 07:30:13.342878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:41.023 [2024-07-25 07:30:13.342903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.023 [2024-07-25 07:30:13.342924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:41.023 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.023 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.023 07:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.956 [2024-07-25 07:30:14.345434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:41.956 [2024-07-25 07:30:14.345513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:41.956 [2024-07-25 07:30:14.345530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:41.956 [2024-07-25 07:30:14.345547] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:41.956 [2024-07-25 07:30:14.345592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.956 [2024-07-25 07:30:14.345641] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:41.956 [2024-07-25 07:30:14.345712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.956 [2024-07-25 07:30:14.345735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.956 [2024-07-25 07:30:14.345756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.956 [2024-07-25 07:30:14.345769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.956 [2024-07-25 07:30:14.345783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.956 [2024-07-25 07:30:14.345796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.956 [2024-07-25 07:30:14.345810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.956 [2024-07-25 07:30:14.345823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.956 [2024-07-25 07:30:14.345837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.956 [2024-07-25 07:30:14.345849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.956 [2024-07-25 07:30:14.345864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:41.956 [2024-07-25 07:30:14.345972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc99820 (9): Bad file descriptor 00:24:41.956 [2024-07-25 07:30:14.347008] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:41.956 [2024-07-25 07:30:14.347030] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.956 07:30:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.956 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.214 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:42.214 07:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.148 07:30:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:43.148 07:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.082 [2024-07-25 07:30:16.407147] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.082 [2024-07-25 07:30:16.407188] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.082 [2024-07-25 07:30:16.407213] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.082 [2024-07-25 07:30:16.534648] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:44.082 07:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.082 [2024-07-25 07:30:16.596833] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:44.082 [2024-07-25 07:30:16.596890] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:44.082 [2024-07-25 07:30:16.596929] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:44.082 [2024-07-25 07:30:16.596957] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:44.082 [2024-07-25 07:30:16.596972] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.082 [2024-07-25 07:30:16.604695] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcdcbb0 was disconnected and freed. delete nvme_qpair. 
00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2549972 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2549972 ']' 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2549972 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2549972 
00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2549972' 00:24:45.456 killing process with pid 2549972 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2549972 00:24:45.456 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2549972 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.457 rmmod nvme_tcp 00:24:45.457 rmmod nvme_fabrics 00:24:45.457 rmmod nvme_keyring 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2549838 ']' 00:24:45.457 
07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2549838 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2549838 ']' 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2549838 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.457 07:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2549838 00:24:45.715 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:45.715 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:45.715 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2549838' 00:24:45.715 killing process with pid 2549838 00:24:45.715 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2549838 00:24:45.715 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2549838 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.973 07:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.873 00:24:47.873 real 0m17.795s 00:24:47.873 user 0m25.748s 00:24:47.873 sys 0m3.029s 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.873 ************************************ 00:24:47.873 END TEST nvmf_discovery_remove_ifc 00:24:47.873 ************************************ 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.873 ************************************ 00:24:47.873 START TEST nvmf_identify_kernel_target 00:24:47.873 ************************************ 00:24:47.873 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:48.131 * Looking for test storage... 
00:24:48.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.131 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.132 07:30:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:50.034 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.034 07:30:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:50.034 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.034 07:30:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:50.034 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:50.034 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.034 
07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.034 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.035 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.293 
07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:24:50.293 00:24:50.293 --- 10.0.0.2 ping statistics --- 00:24:50.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.293 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:24:50.293 00:24:50.293 --- 10.0.0.1 ping statistics --- 00:24:50.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.293 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.293 07:30:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
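The `get_main_ns_ip` trace above resolves a transport name to an IP address in two hops: the associative array stores a variable *name*, and bash indirect expansion (`${!ip}`) dereferences it. A self-contained sketch of the same lookup:

```shell
#!/usr/bin/env bash
# Values mirror the log: "tcp" maps to the variable name NVMF_INITIATOR_IP,
# which in turn holds 10.0.0.1.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

declare -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
ip_candidates["tcp"]=NVMF_INITIATOR_IP

transport=tcp
ip=${ip_candidates[$transport]}   # -> the string "NVMF_INITIATOR_IP"
target_ip=${!ip}                  # indirect expansion -> 10.0.0.1
echo "$target_ip"
```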
nvmf/common.sh@639 -- # local block nvme 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:50.293 07:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:51.261 Waiting for block devices as requested 00:24:51.261 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:51.519 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:51.519 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:51.778 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:51.778 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:51.778 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:51.778 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:52.037 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:52.037 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:52.037 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:52.037 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:52.294 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:52.294 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:52.294 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:52.294 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:52.562 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:52.562 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:52.562 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:52.821 No valid GPT data, bailing 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:52.821 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:52.821 00:24:52.821 Discovery Log Number of Records 2, Generation counter 2 00:24:52.821 =====Discovery Log Entry 0====== 00:24:52.821 trtype: tcp 00:24:52.821 adrfam: ipv4 00:24:52.821 subtype: current discovery subsystem 00:24:52.821 treq: not specified, sq flow control disable supported 00:24:52.821 portid: 1 00:24:52.821 trsvcid: 4420 00:24:52.821 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:52.821 traddr: 10.0.0.1 00:24:52.821 eflags: none 00:24:52.821 sectype: none 00:24:52.821 =====Discovery Log Entry 1====== 00:24:52.821 trtype: tcp 00:24:52.821 adrfam: ipv4 00:24:52.821 subtype: nvme subsystem 00:24:52.821 treq: not specified, sq flow control disable supported 00:24:52.821 portid: 1 
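The `configure_kernel_target` steps traced above build the kernel nvmet configfs tree: a subsystem directory, a namespace directory under it, a port directory, a series of attribute writes (the bare `echo` lines in the trace), and finally a symlink that exports the subsystem on the port. The sketch below reproduces only the directory layout under a scratch root, so it runs without root or the nvmet module; the commented attribute writes show where the log's `echo` lines would land on a real system and are an assumption about that mapping.

```shell
#!/usr/bin/env bash
set -e
# Scratch directory stands in for /sys/kernel/config/nvmet (a real run
# needs root and `modprobe nvmet`, as the trace shows).
nvmet=$(mktemp -d)
nqn=nqn.2016-06.io.spdk:testnqn

mkdir -p "$nvmet/subsystems/$nqn/namespaces/1"
mkdir -p "$nvmet/ports/1/subsystems"

# On real configfs the log's bare echo lines are attribute writes, roughly:
#   echo 1            > $nvmet/subsystems/$nqn/attr_allow_any_host
#   echo /dev/nvme0n1 > $nvmet/subsystems/$nqn/namespaces/1/device_path
#   echo 1            > $nvmet/subsystems/$nqn/namespaces/1/enable
#   echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
#   echo tcp          > $nvmet/ports/1/addr_trtype
#   echo 4420         > $nvmet/ports/1/addr_trsvcid
#   echo ipv4         > $nvmet/ports/1/addr_adrfam

# Exporting the subsystem on the port is a symlink, exactly as in the log:
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/$nqn"
echo "target layout created under $nvmet"
```

Once the symlink lands, the kernel starts listening, which is why the very next trace line is the `nvme discover` whose output follows.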
00:24:52.821 trsvcid: 4420 00:24:52.821 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:52.821 traddr: 10.0.0.1 00:24:52.821 eflags: none 00:24:52.821 sectype: none 00:24:52.822 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:52.822 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:52.822 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.822 ===================================================== 00:24:52.822 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:52.822 ===================================================== 00:24:52.822 Controller Capabilities/Features 00:24:52.822 ================================ 00:24:52.822 Vendor ID: 0000 00:24:52.822 Subsystem Vendor ID: 0000 00:24:52.822 Serial Number: f1a11a8718e607a1e830 00:24:52.822 Model Number: Linux 00:24:52.822 Firmware Version: 6.7.0-68 00:24:52.822 Recommended Arb Burst: 0 00:24:52.822 IEEE OUI Identifier: 00 00 00 00:24:52.822 Multi-path I/O 00:24:52.822 May have multiple subsystem ports: No 00:24:52.822 May have multiple controllers: No 00:24:52.822 Associated with SR-IOV VF: No 00:24:52.822 Max Data Transfer Size: Unlimited 00:24:52.822 Max Number of Namespaces: 0 00:24:52.822 Max Number of I/O Queues: 1024 00:24:52.822 NVMe Specification Version (VS): 1.3 00:24:52.822 NVMe Specification Version (Identify): 1.3 00:24:52.822 Maximum Queue Entries: 1024 00:24:52.822 Contiguous Queues Required: No 00:24:52.822 Arbitration Mechanisms Supported 00:24:52.822 Weighted Round Robin: Not Supported 00:24:52.822 Vendor Specific: Not Supported 00:24:52.822 Reset Timeout: 7500 ms 00:24:52.822 Doorbell Stride: 4 bytes 00:24:52.822 NVM Subsystem Reset: Not Supported 00:24:52.822 Command Sets Supported 00:24:52.822 NVM Command Set: Supported 00:24:52.822 Boot Partition: Not Supported 
00:24:52.822 Memory Page Size Minimum: 4096 bytes 00:24:52.822 Memory Page Size Maximum: 4096 bytes 00:24:52.822 Persistent Memory Region: Not Supported 00:24:52.822 Optional Asynchronous Events Supported 00:24:52.822 Namespace Attribute Notices: Not Supported 00:24:52.822 Firmware Activation Notices: Not Supported 00:24:52.822 ANA Change Notices: Not Supported 00:24:52.822 PLE Aggregate Log Change Notices: Not Supported 00:24:52.822 LBA Status Info Alert Notices: Not Supported 00:24:52.822 EGE Aggregate Log Change Notices: Not Supported 00:24:52.822 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.822 Zone Descriptor Change Notices: Not Supported 00:24:52.822 Discovery Log Change Notices: Supported 00:24:52.822 Controller Attributes 00:24:52.822 128-bit Host Identifier: Not Supported 00:24:52.822 Non-Operational Permissive Mode: Not Supported 00:24:52.822 NVM Sets: Not Supported 00:24:52.822 Read Recovery Levels: Not Supported 00:24:52.822 Endurance Groups: Not Supported 00:24:52.822 Predictable Latency Mode: Not Supported 00:24:52.822 Traffic Based Keep ALive: Not Supported 00:24:52.822 Namespace Granularity: Not Supported 00:24:52.822 SQ Associations: Not Supported 00:24:52.822 UUID List: Not Supported 00:24:52.822 Multi-Domain Subsystem: Not Supported 00:24:52.822 Fixed Capacity Management: Not Supported 00:24:52.822 Variable Capacity Management: Not Supported 00:24:52.822 Delete Endurance Group: Not Supported 00:24:52.822 Delete NVM Set: Not Supported 00:24:52.822 Extended LBA Formats Supported: Not Supported 00:24:52.822 Flexible Data Placement Supported: Not Supported 00:24:52.822 00:24:52.822 Controller Memory Buffer Support 00:24:52.822 ================================ 00:24:52.822 Supported: No 00:24:52.822 00:24:52.822 Persistent Memory Region Support 00:24:52.822 ================================ 00:24:52.822 Supported: No 00:24:52.822 00:24:52.822 Admin Command Set Attributes 00:24:52.822 ============================ 00:24:52.822 Security 
Send/Receive: Not Supported 00:24:52.822 Format NVM: Not Supported 00:24:52.822 Firmware Activate/Download: Not Supported 00:24:52.822 Namespace Management: Not Supported 00:24:52.822 Device Self-Test: Not Supported 00:24:52.822 Directives: Not Supported 00:24:52.822 NVMe-MI: Not Supported 00:24:52.822 Virtualization Management: Not Supported 00:24:52.822 Doorbell Buffer Config: Not Supported 00:24:52.822 Get LBA Status Capability: Not Supported 00:24:52.822 Command & Feature Lockdown Capability: Not Supported 00:24:52.822 Abort Command Limit: 1 00:24:52.822 Async Event Request Limit: 1 00:24:52.822 Number of Firmware Slots: N/A 00:24:52.822 Firmware Slot 1 Read-Only: N/A 00:24:52.822 Firmware Activation Without Reset: N/A 00:24:52.822 Multiple Update Detection Support: N/A 00:24:52.822 Firmware Update Granularity: No Information Provided 00:24:52.822 Per-Namespace SMART Log: No 00:24:52.822 Asymmetric Namespace Access Log Page: Not Supported 00:24:52.822 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:52.822 Command Effects Log Page: Not Supported 00:24:52.822 Get Log Page Extended Data: Supported 00:24:52.822 Telemetry Log Pages: Not Supported 00:24:52.822 Persistent Event Log Pages: Not Supported 00:24:52.822 Supported Log Pages Log Page: May Support 00:24:52.822 Commands Supported & Effects Log Page: Not Supported 00:24:52.822 Feature Identifiers & Effects Log Page:May Support 00:24:52.822 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.822 Data Area 4 for Telemetry Log: Not Supported 00:24:52.822 Error Log Page Entries Supported: 1 00:24:52.822 Keep Alive: Not Supported 00:24:52.822 00:24:52.822 NVM Command Set Attributes 00:24:52.822 ========================== 00:24:52.822 Submission Queue Entry Size 00:24:52.822 Max: 1 00:24:52.822 Min: 1 00:24:52.822 Completion Queue Entry Size 00:24:52.822 Max: 1 00:24:52.822 Min: 1 00:24:52.822 Number of Namespaces: 0 00:24:52.822 Compare Command: Not Supported 00:24:52.822 Write Uncorrectable Command: 
Not Supported 00:24:52.822 Dataset Management Command: Not Supported 00:24:52.822 Write Zeroes Command: Not Supported 00:24:52.822 Set Features Save Field: Not Supported 00:24:52.822 Reservations: Not Supported 00:24:52.822 Timestamp: Not Supported 00:24:52.822 Copy: Not Supported 00:24:52.822 Volatile Write Cache: Not Present 00:24:52.822 Atomic Write Unit (Normal): 1 00:24:52.822 Atomic Write Unit (PFail): 1 00:24:52.822 Atomic Compare & Write Unit: 1 00:24:52.822 Fused Compare & Write: Not Supported 00:24:52.822 Scatter-Gather List 00:24:52.822 SGL Command Set: Supported 00:24:52.822 SGL Keyed: Not Supported 00:24:52.822 SGL Bit Bucket Descriptor: Not Supported 00:24:52.822 SGL Metadata Pointer: Not Supported 00:24:52.822 Oversized SGL: Not Supported 00:24:52.822 SGL Metadata Address: Not Supported 00:24:52.822 SGL Offset: Supported 00:24:52.822 Transport SGL Data Block: Not Supported 00:24:52.822 Replay Protected Memory Block: Not Supported 00:24:52.822 00:24:52.822 Firmware Slot Information 00:24:52.822 ========================= 00:24:52.822 Active slot: 0 00:24:52.822 00:24:52.822 00:24:52.822 Error Log 00:24:52.822 ========= 00:24:52.822 00:24:52.822 Active Namespaces 00:24:52.822 ================= 00:24:52.822 Discovery Log Page 00:24:52.822 ================== 00:24:52.822 Generation Counter: 2 00:24:52.822 Number of Records: 2 00:24:52.822 Record Format: 0 00:24:52.822 00:24:52.822 Discovery Log Entry 0 00:24:52.822 ---------------------- 00:24:52.822 Transport Type: 3 (TCP) 00:24:52.822 Address Family: 1 (IPv4) 00:24:52.822 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:52.822 Entry Flags: 00:24:52.822 Duplicate Returned Information: 0 00:24:52.822 Explicit Persistent Connection Support for Discovery: 0 00:24:52.822 Transport Requirements: 00:24:52.822 Secure Channel: Not Specified 00:24:52.822 Port ID: 1 (0x0001) 00:24:52.822 Controller ID: 65535 (0xffff) 00:24:52.822 Admin Max SQ Size: 32 00:24:52.822 Transport Service Identifier: 4420 
00:24:52.822 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:52.822 Transport Address: 10.0.0.1 00:24:52.822 Discovery Log Entry 1 00:24:52.822 ---------------------- 00:24:52.822 Transport Type: 3 (TCP) 00:24:52.822 Address Family: 1 (IPv4) 00:24:52.822 Subsystem Type: 2 (NVM Subsystem) 00:24:52.822 Entry Flags: 00:24:52.822 Duplicate Returned Information: 0 00:24:52.822 Explicit Persistent Connection Support for Discovery: 0 00:24:52.822 Transport Requirements: 00:24:52.822 Secure Channel: Not Specified 00:24:52.822 Port ID: 1 (0x0001) 00:24:52.822 Controller ID: 65535 (0xffff) 00:24:52.822 Admin Max SQ Size: 32 00:24:52.822 Transport Service Identifier: 4420 00:24:52.822 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:52.822 Transport Address: 10.0.0.1 00:24:52.822 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.822 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.080 get_feature(0x01) failed 00:24:53.081 get_feature(0x02) failed 00:24:53.081 get_feature(0x04) failed 00:24:53.081 ===================================================== 00:24:53.081 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:53.081 ===================================================== 00:24:53.081 Controller Capabilities/Features 00:24:53.081 ================================ 00:24:53.081 Vendor ID: 0000 00:24:53.081 Subsystem Vendor ID: 0000 00:24:53.081 Serial Number: 7f7e7811f27b757ecd8d 00:24:53.081 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:53.081 Firmware Version: 6.7.0-68 00:24:53.081 Recommended Arb Burst: 6 00:24:53.081 IEEE OUI Identifier: 00 00 00 00:24:53.081 Multi-path I/O 00:24:53.081 May have multiple subsystem ports: Yes 00:24:53.081 May have multiple 
controllers: Yes 00:24:53.081 Associated with SR-IOV VF: No 00:24:53.081 Max Data Transfer Size: Unlimited 00:24:53.081 Max Number of Namespaces: 1024 00:24:53.081 Max Number of I/O Queues: 128 00:24:53.081 NVMe Specification Version (VS): 1.3 00:24:53.081 NVMe Specification Version (Identify): 1.3 00:24:53.081 Maximum Queue Entries: 1024 00:24:53.081 Contiguous Queues Required: No 00:24:53.081 Arbitration Mechanisms Supported 00:24:53.081 Weighted Round Robin: Not Supported 00:24:53.081 Vendor Specific: Not Supported 00:24:53.081 Reset Timeout: 7500 ms 00:24:53.081 Doorbell Stride: 4 bytes 00:24:53.081 NVM Subsystem Reset: Not Supported 00:24:53.081 Command Sets Supported 00:24:53.081 NVM Command Set: Supported 00:24:53.081 Boot Partition: Not Supported 00:24:53.081 Memory Page Size Minimum: 4096 bytes 00:24:53.081 Memory Page Size Maximum: 4096 bytes 00:24:53.081 Persistent Memory Region: Not Supported 00:24:53.081 Optional Asynchronous Events Supported 00:24:53.081 Namespace Attribute Notices: Supported 00:24:53.081 Firmware Activation Notices: Not Supported 00:24:53.081 ANA Change Notices: Supported 00:24:53.081 PLE Aggregate Log Change Notices: Not Supported 00:24:53.081 LBA Status Info Alert Notices: Not Supported 00:24:53.081 EGE Aggregate Log Change Notices: Not Supported 00:24:53.081 Normal NVM Subsystem Shutdown event: Not Supported 00:24:53.081 Zone Descriptor Change Notices: Not Supported 00:24:53.081 Discovery Log Change Notices: Not Supported 00:24:53.081 Controller Attributes 00:24:53.081 128-bit Host Identifier: Supported 00:24:53.081 Non-Operational Permissive Mode: Not Supported 00:24:53.081 NVM Sets: Not Supported 00:24:53.081 Read Recovery Levels: Not Supported 00:24:53.081 Endurance Groups: Not Supported 00:24:53.081 Predictable Latency Mode: Not Supported 00:24:53.081 Traffic Based Keep ALive: Supported 00:24:53.081 Namespace Granularity: Not Supported 00:24:53.081 SQ Associations: Not Supported 00:24:53.081 UUID List: Not Supported 
00:24:53.081 Multi-Domain Subsystem: Not Supported 00:24:53.081 Fixed Capacity Management: Not Supported 00:24:53.081 Variable Capacity Management: Not Supported 00:24:53.081 Delete Endurance Group: Not Supported 00:24:53.081 Delete NVM Set: Not Supported 00:24:53.081 Extended LBA Formats Supported: Not Supported 00:24:53.081 Flexible Data Placement Supported: Not Supported 00:24:53.081 00:24:53.081 Controller Memory Buffer Support 00:24:53.081 ================================ 00:24:53.081 Supported: No 00:24:53.081 00:24:53.081 Persistent Memory Region Support 00:24:53.081 ================================ 00:24:53.081 Supported: No 00:24:53.081 00:24:53.081 Admin Command Set Attributes 00:24:53.081 ============================ 00:24:53.081 Security Send/Receive: Not Supported 00:24:53.081 Format NVM: Not Supported 00:24:53.081 Firmware Activate/Download: Not Supported 00:24:53.081 Namespace Management: Not Supported 00:24:53.081 Device Self-Test: Not Supported 00:24:53.081 Directives: Not Supported 00:24:53.081 NVMe-MI: Not Supported 00:24:53.081 Virtualization Management: Not Supported 00:24:53.081 Doorbell Buffer Config: Not Supported 00:24:53.081 Get LBA Status Capability: Not Supported 00:24:53.081 Command & Feature Lockdown Capability: Not Supported 00:24:53.081 Abort Command Limit: 4 00:24:53.081 Async Event Request Limit: 4 00:24:53.081 Number of Firmware Slots: N/A 00:24:53.081 Firmware Slot 1 Read-Only: N/A 00:24:53.081 Firmware Activation Without Reset: N/A 00:24:53.081 Multiple Update Detection Support: N/A 00:24:53.081 Firmware Update Granularity: No Information Provided 00:24:53.081 Per-Namespace SMART Log: Yes 00:24:53.081 Asymmetric Namespace Access Log Page: Supported 00:24:53.081 ANA Transition Time : 10 sec 00:24:53.081 00:24:53.081 Asymmetric Namespace Access Capabilities 00:24:53.081 ANA Optimized State : Supported 00:24:53.081 ANA Non-Optimized State : Supported 00:24:53.081 ANA Inaccessible State : Supported 00:24:53.081 ANA Persistent Loss 
State : Supported 00:24:53.081 ANA Change State : Supported 00:24:53.081 ANAGRPID is not changed : No 00:24:53.081 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:53.081 00:24:53.081 ANA Group Identifier Maximum : 128 00:24:53.081 Number of ANA Group Identifiers : 128 00:24:53.081 Max Number of Allowed Namespaces : 1024 00:24:53.081 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:53.081 Command Effects Log Page: Supported 00:24:53.081 Get Log Page Extended Data: Supported 00:24:53.081 Telemetry Log Pages: Not Supported 00:24:53.081 Persistent Event Log Pages: Not Supported 00:24:53.081 Supported Log Pages Log Page: May Support 00:24:53.081 Commands Supported & Effects Log Page: Not Supported 00:24:53.081 Feature Identifiers & Effects Log Page:May Support 00:24:53.081 NVMe-MI Commands & Effects Log Page: May Support 00:24:53.081 Data Area 4 for Telemetry Log: Not Supported 00:24:53.081 Error Log Page Entries Supported: 128 00:24:53.081 Keep Alive: Supported 00:24:53.081 Keep Alive Granularity: 1000 ms 00:24:53.081 00:24:53.081 NVM Command Set Attributes 00:24:53.081 ========================== 00:24:53.081 Submission Queue Entry Size 00:24:53.081 Max: 64 00:24:53.081 Min: 64 00:24:53.081 Completion Queue Entry Size 00:24:53.081 Max: 16 00:24:53.081 Min: 16 00:24:53.081 Number of Namespaces: 1024 00:24:53.081 Compare Command: Not Supported 00:24:53.081 Write Uncorrectable Command: Not Supported 00:24:53.081 Dataset Management Command: Supported 00:24:53.081 Write Zeroes Command: Supported 00:24:53.081 Set Features Save Field: Not Supported 00:24:53.081 Reservations: Not Supported 00:24:53.081 Timestamp: Not Supported 00:24:53.081 Copy: Not Supported 00:24:53.081 Volatile Write Cache: Present 00:24:53.081 Atomic Write Unit (Normal): 1 00:24:53.081 Atomic Write Unit (PFail): 1 00:24:53.081 Atomic Compare & Write Unit: 1 00:24:53.081 Fused Compare & Write: Not Supported 00:24:53.081 Scatter-Gather List 00:24:53.081 SGL Command Set: Supported 00:24:53.081 SGL 
Keyed: Not Supported 00:24:53.081 SGL Bit Bucket Descriptor: Not Supported 00:24:53.081 SGL Metadata Pointer: Not Supported 00:24:53.081 Oversized SGL: Not Supported 00:24:53.081 SGL Metadata Address: Not Supported 00:24:53.081 SGL Offset: Supported 00:24:53.081 Transport SGL Data Block: Not Supported 00:24:53.081 Replay Protected Memory Block: Not Supported 00:24:53.081 00:24:53.081 Firmware Slot Information 00:24:53.081 ========================= 00:24:53.081 Active slot: 0 00:24:53.081 00:24:53.081 Asymmetric Namespace Access 00:24:53.081 =========================== 00:24:53.081 Change Count : 0 00:24:53.081 Number of ANA Group Descriptors : 1 00:24:53.081 ANA Group Descriptor : 0 00:24:53.081 ANA Group ID : 1 00:24:53.081 Number of NSID Values : 1 00:24:53.081 Change Count : 0 00:24:53.081 ANA State : 1 00:24:53.081 Namespace Identifier : 1 00:24:53.081 00:24:53.081 Commands Supported and Effects 00:24:53.081 ============================== 00:24:53.081 Admin Commands 00:24:53.081 -------------- 00:24:53.081 Get Log Page (02h): Supported 00:24:53.081 Identify (06h): Supported 00:24:53.081 Abort (08h): Supported 00:24:53.081 Set Features (09h): Supported 00:24:53.081 Get Features (0Ah): Supported 00:24:53.081 Asynchronous Event Request (0Ch): Supported 00:24:53.081 Keep Alive (18h): Supported 00:24:53.081 I/O Commands 00:24:53.081 ------------ 00:24:53.081 Flush (00h): Supported 00:24:53.081 Write (01h): Supported LBA-Change 00:24:53.081 Read (02h): Supported 00:24:53.081 Write Zeroes (08h): Supported LBA-Change 00:24:53.081 Dataset Management (09h): Supported 00:24:53.081 00:24:53.081 Error Log 00:24:53.081 ========= 00:24:53.081 Entry: 0 00:24:53.081 Error Count: 0x3 00:24:53.081 Submission Queue Id: 0x0 00:24:53.081 Command Id: 0x5 00:24:53.081 Phase Bit: 0 00:24:53.082 Status Code: 0x2 00:24:53.082 Status Code Type: 0x0 00:24:53.082 Do Not Retry: 1 00:24:53.082 Error Location: 0x28 00:24:53.082 LBA: 0x0 00:24:53.082 Namespace: 0x0 00:24:53.082 Vendor Log Page: 
0x0 00:24:53.082 ----------- 00:24:53.082 Entry: 1 00:24:53.082 Error Count: 0x2 00:24:53.082 Submission Queue Id: 0x0 00:24:53.082 Command Id: 0x5 00:24:53.082 Phase Bit: 0 00:24:53.082 Status Code: 0x2 00:24:53.082 Status Code Type: 0x0 00:24:53.082 Do Not Retry: 1 00:24:53.082 Error Location: 0x28 00:24:53.082 LBA: 0x0 00:24:53.082 Namespace: 0x0 00:24:53.082 Vendor Log Page: 0x0 00:24:53.082 ----------- 00:24:53.082 Entry: 2 00:24:53.082 Error Count: 0x1 00:24:53.082 Submission Queue Id: 0x0 00:24:53.082 Command Id: 0x4 00:24:53.082 Phase Bit: 0 00:24:53.082 Status Code: 0x2 00:24:53.082 Status Code Type: 0x0 00:24:53.082 Do Not Retry: 1 00:24:53.082 Error Location: 0x28 00:24:53.082 LBA: 0x0 00:24:53.082 Namespace: 0x0 00:24:53.082 Vendor Log Page: 0x0 00:24:53.082 00:24:53.082 Number of Queues 00:24:53.082 ================ 00:24:53.082 Number of I/O Submission Queues: 128 00:24:53.082 Number of I/O Completion Queues: 128 00:24:53.082 00:24:53.082 ZNS Specific Controller Data 00:24:53.082 ============================ 00:24:53.082 Zone Append Size Limit: 0 00:24:53.082 00:24:53.082 00:24:53.082 Active Namespaces 00:24:53.082 ================= 00:24:53.082 get_feature(0x05) failed 00:24:53.082 Namespace ID:1 00:24:53.082 Command Set Identifier: NVM (00h) 00:24:53.082 Deallocate: Supported 00:24:53.082 Deallocated/Unwritten Error: Not Supported 00:24:53.082 Deallocated Read Value: Unknown 00:24:53.082 Deallocate in Write Zeroes: Not Supported 00:24:53.082 Deallocated Guard Field: 0xFFFF 00:24:53.082 Flush: Supported 00:24:53.082 Reservation: Not Supported 00:24:53.082 Namespace Sharing Capabilities: Multiple Controllers 00:24:53.082 Size (in LBAs): 1953525168 (931GiB) 00:24:53.082 Capacity (in LBAs): 1953525168 (931GiB) 00:24:53.082 Utilization (in LBAs): 1953525168 (931GiB) 00:24:53.082 UUID: 4bfeb9c2-0734-4654-89f5-264f8ab4414c 00:24:53.082 Thin Provisioning: Not Supported 00:24:53.082 Per-NS Atomic Units: Yes 00:24:53.082 Atomic Boundary Size (Normal): 0 
00:24:53.082 Atomic Boundary Size (PFail): 0 00:24:53.082 Atomic Boundary Offset: 0 00:24:53.082 NGUID/EUI64 Never Reused: No 00:24:53.082 ANA group ID: 1 00:24:53.082 Namespace Write Protected: No 00:24:53.082 Number of LBA Formats: 1 00:24:53.082 Current LBA Format: LBA Format #00 00:24:53.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:53.082 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.082 rmmod nvme_tcp 00:24:53.082 rmmod nvme_fabrics 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.082 
07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.082 07:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.983 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.983 07:30:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:55.241 07:30:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:56.175 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:56.175 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:56.175 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:56.433 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:57.366 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:57.366 00:24:57.366 real 0m9.427s 00:24:57.366 user 0m2.000s 00:24:57.366 sys 0m3.360s 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.366 ************************************ 00:24:57.366 END TEST nvmf_identify_kernel_target 00:24:57.366 ************************************ 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.366 ************************************ 00:24:57.366 START TEST nvmf_auth_host 00:24:57.366 ************************************ 00:24:57.366 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:57.366 * Looking for test storage... 00:24:57.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
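The `clean_kernel_target` trace above unwinds the kernel nvmet configuration in reverse build order: disable the namespace, unlink the subsystem from the port, then `rmdir` the configfs nodes inside-out before unloading the modules. A condensed sketch of that teardown (requires root and an existing target; the `enable` attribute path is an assumption — the trace only shows `echo 0`):

```shell
# Teardown order for a kernel nvmet target over configfs, as in the trace.
# Must run as root against a configured nqn.2016-06.io.spdk:testnqn target.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # disable namespace (assumed target of 'echo 0')
rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink subsystem from the port
rmdir "$cfg/subsystems/$nqn/namespaces/1"            # remove the namespace node
rmdir "$cfg/ports/1"                                 # remove the port
rmdir "$cfg/subsystems/$nqn"                         # remove the subsystem itself
modprobe -r nvmet_tcp nvmet                          # finally unload the target modules
```

The ordering matters: configfs refuses to remove a subsystem that is still linked under a port, which is why the symlink is removed first.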
00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.624 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.624 07:30:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.625 07:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:59.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:59.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.526 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:59.527 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:59.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.527 07:30:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:24:59.527 00:24:59.527 --- 10.0.0.2 ping statistics --- 00:24:59.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.527 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:59.527 00:24:59.527 --- 10.0.0.1 ping statistics --- 00:24:59.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.527 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
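The `nvmf_tcp_init` sequence above builds the two-sided test rig on one physical NIC: one port (`cvl_0_0`) is moved into a private network namespace to act as the target, while its sibling (`cvl_0_1`) stays in the root namespace as the initiator, and the closing pings verify both directions. A condensed sketch, with names and addresses as in this run (requires root):

```shell
# TCP test rig from nvmf_tcp_init: cvl_0_0 becomes the target inside a
# namespace, cvl_0_1 remains the initiator in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
```

Because both ports sit on the same switch fabric, the namespace boundary is what forces traffic onto the wire instead of the loopback path.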
00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2557554 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2557554 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2557554 ']' 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
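`nvmfappstart` above launches `nvmf_tgt` inside the target namespace and then blocks in `waitforlisten` until the app is up. A simplified sketch of that wait: succeed once the RPC Unix socket exists, fail fast if the process dies first (the real helper in autotest_common.sh also probes the RPC endpoint; the retry count and interval here are assumptions):

```shell
# Minimal waitforlisten: poll until the RPC socket appears or the
# target process exits. Interval/retries are illustrative assumptions.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking the pid on every iteration is what turns a crashed target into an immediate failure instead of a full timeout.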
00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:59.527 07:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=34a24cafc59099bf6ddbd47f74443ddc 00:25:00.902 07:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.d73 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 34a24cafc59099bf6ddbd47f74443ddc 0 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 34a24cafc59099bf6ddbd47f74443ddc 0 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=34a24cafc59099bf6ddbd47f74443ddc 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.d73 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.d73 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.d73 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:00.902 07:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ae99ad094da3817e831f14908939ece193607168384c3a8887d9642522584693 00:25:00.902 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hLc 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae99ad094da3817e831f14908939ece193607168384c3a8887d9642522584693 3 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae99ad094da3817e831f14908939ece193607168384c3a8887d9642522584693 3 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae99ad094da3817e831f14908939ece193607168384c3a8887d9642522584693 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hLc 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hLc 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hLc 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6c7046da9e49a6f8d222281db3c51ac58cf99e7a0cdcded 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XgQ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6c7046da9e49a6f8d222281db3c51ac58cf99e7a0cdcded 0 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6c7046da9e49a6f8d222281db3c51ac58cf99e7a0cdcded 0 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6c7046da9e49a6f8d222281db3c51ac58cf99e7a0cdcded 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XgQ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XgQ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XgQ 
00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5b6ff6918c40d9356a1816c871c591eb68ea180d8fe96a8 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GWZ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5b6ff6918c40d9356a1816c871c591eb68ea180d8fe96a8 2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5b6ff6918c40d9356a1816c871c591eb68ea180d8fe96a8 2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5b6ff6918c40d9356a1816c871c591eb68ea180d8fe96a8 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.903 07:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GWZ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GWZ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GWZ 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=304e491a9c710cd8559a5950ea4b0774 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.h25 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 304e491a9c710cd8559a5950ea4b0774 1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 304e491a9c710cd8559a5950ea4b0774 1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=304e491a9c710cd8559a5950ea4b0774 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.h25 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.h25 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.h25 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=51c0bef441c8737b8c0840e2815e49b2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YST 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 51c0bef441c8737b8c0840e2815e49b2 1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 51c0bef441c8737b8c0840e2815e49b2 1 00:25:00.903 07:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=51c0bef441c8737b8c0840e2815e49b2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YST 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YST 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.YST 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7341425a064f5c58493660af9ce23deb8c0d13aac2392042 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4AD 00:25:00.903 07:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7341425a064f5c58493660af9ce23deb8c0d13aac2392042 2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7341425a064f5c58493660af9ce23deb8c0d13aac2392042 2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7341425a064f5c58493660af9ce23deb8c0d13aac2392042 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4AD 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4AD 00:25:00.903 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4AD 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=7118f336bc445e02a7dc5c008d593e4e 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.A46 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7118f336bc445e02a7dc5c008d593e4e 0 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7118f336bc445e02a7dc5c008d593e4e 0 00:25:00.904 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7118f336bc445e02a7dc5c008d593e4e 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.A46 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.A46 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.A46 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6d8335854f423c91333167badb7b9513d5ce1d1506733b86b9511ee8e7d19e96 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UF8 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d8335854f423c91333167badb7b9513d5ce1d1506733b86b9511ee8e7d19e96 3 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d8335854f423c91333167badb7b9513d5ce1d1506733b86b9511ee8e7d19e96 3 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d8335854f423c91333167badb7b9513d5ce1d1506733b86b9511ee8e7d19e96 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UF8 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UF8 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UF8 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2557554 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 2557554 ']' 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.161 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.d73 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hLc ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hLc 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XgQ 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GWZ ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GWZ 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.h25 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.YST ]] 00:25:01.419 07:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YST 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4AD 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.A46 ]] 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A46 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.419 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UF8 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:01.420 07:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:02.352 Waiting for block devices as requested 00:25:02.610 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:02.610 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:02.892 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:02.892 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:02.892 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:02.892 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:03.149 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:03.149 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:03.149 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:03.149 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:03.406 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:03.406 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:03.406 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:03.406 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:03.663 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:03.663 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:03.663 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:04.228 No valid GPT data, bailing 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:04.228 07:30:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:25:04.228 00:25:04.228 Discovery Log Number of Records 2, Generation counter 2 00:25:04.228 =====Discovery Log Entry 0====== 00:25:04.228 trtype: tcp 00:25:04.228 adrfam: ipv4 00:25:04.228 subtype: current discovery subsystem 00:25:04.228 treq: not specified, sq flow control disable supported 00:25:04.228 portid: 1 00:25:04.228 trsvcid: 4420 00:25:04.228 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:04.228 traddr: 10.0.0.1 00:25:04.228 eflags: none 00:25:04.228 sectype: none 00:25:04.228 =====Discovery Log Entry 1====== 00:25:04.228 trtype: tcp 00:25:04.228 adrfam: ipv4 00:25:04.228 subtype: nvme subsystem 00:25:04.228 treq: not specified, sq flow control 
disable supported 00:25:04.228 portid: 1 00:25:04.228 trsvcid: 4420 00:25:04.228 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:04.228 traddr: 10.0.0.1 00:25:04.228 eflags: none 00:25:04.228 sectype: none 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:04.228 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.229 07:30:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.229 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.487 nvme0n1 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.487 07:30:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.487 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.488 07:30:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.488 07:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 nvme0n1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.746 07:30:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.746 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.004 nvme0n1 00:25:05.004 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.004 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:05.005 
07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.005 07:30:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.005 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.293 nvme0n1 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.293 07:30:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.293 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.294 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.552 nvme0n1 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.552 07:30:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.552 07:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.552 nvme0n1 00:25:05.552 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.552 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.552 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:05.552 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.552 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.552 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.810 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 nvme0n1 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.810 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:06.068 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.068 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 nvme0n1 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.069 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.326 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.326 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.327 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:06.327 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.327 nvme0n1 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.327 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.585 07:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.585 07:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.585 nvme0n1 00:25:06.585 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.585 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.585 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.585 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.585 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.843 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.844 
07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.844 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.102 nvme0n1 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.102 07:30:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.102 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.360 nvme0n1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:07.360 07:30:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.360 07:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.618 nvme0n1 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.618 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.876 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.877 
07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.877 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.135 nvme0n1 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.135 07:30:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.135 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:08.392 nvme0n1 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.392 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.393 
07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.393 07:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 nvme0n1 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.958 07:30:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.958 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.524 nvme0n1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.524 07:30:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.524 07:30:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.524 07:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.088 nvme0n1 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.088 07:30:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.088 07:30:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.088 07:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.655 nvme0n1 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.655 07:30:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:10.655 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.656 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:11.221 nvme0n1 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.221 
07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.221 07:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.787 nvme0n1 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.787 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.045 07:30:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.045 07:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.978 nvme0n1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.978 07:30:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.978 07:30:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.978 07:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 nvme0n1 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 07:30:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:13.910 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.911 07:30:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.911 07:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.282 nvme0n1 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.282 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.283 07:30:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.283 07:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:16.212 nvme0n1 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.212 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.213 
07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.213 07:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.144 nvme0n1 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.145 07:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.145 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.403 nvme0n1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:17.403 07:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.403 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.662 nvme0n1 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
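The repeated `ip_candidates` / `NVMF_INITIATOR_IP` lines above are the xtrace of `get_main_ns_ip` in `nvmf/common.sh`: it maps the transport type to the *name* of an environment variable, dereferences it, and echoes the resulting address. A self-contained reconstruction of that logic (variable values here are examples, not the real test-bed configuration):

```shell
#!/usr/bin/env bash
# Reconstruction of the get_main_ns_ip selection traced in the log.
# The two IP values below are placeholders for this sketch.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Fail if the transport is unset or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    # Resolve the variable name for this transport, then dereference it
    # with bash indirect expansion (${!ip}).
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip
```

With `TEST_TRANSPORT=tcp` this echoes the initiator address, matching the `echo 10.0.0.1` lines in the trace.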
00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.662 
07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.662 07:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.662 nvme0n1 00:25:17.662 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.663 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.663 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.663 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.663 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.663 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.921 07:30:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
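The `DHHC-1:NN:...==:` strings passed around above are NVMe DH-HMAC-CHAP secrets. A small Python sketch of that representation, based on the NVMe DH-HMAC-CHAP secret format as I understand it (an assumption: the second field is a hash-transform indicator, the base64 payload is the secret followed by a CRC-32 trailer, and the CRC byte order is little-endian as in common implementations):

```python
import base64
import zlib


def make_dhchap_key(secret: bytes) -> str:
    """Encode a secret as 'DHHC-1:<ind>:<base64>:' (format assumed).

    The indicator is chosen here from the secret length (01=32B, 02=48B,
    03=64B, matching SHA-256/384/512 output sizes); a CRC-32 of the secret
    is appended little-endian before base64 encoding.
    """
    ind = {32: "01", 48: "02", 64: "03"}[len(secret)]
    raw = secret + zlib.crc32(secret).to_bytes(4, "little")
    return f"DHHC-1:{ind}:{base64.b64encode(raw).decode()}:"


def parse_dhchap_key(key: str) -> bytes:
    """Split a DHHC-1 string back into its secret, verifying the CRC."""
    prefix, _indicator, payload, _trailing = key.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(payload)
    secret, crc = raw[:-4], raw[-4:]
    if crc != zlib.crc32(secret).to_bytes(4, "little"):
        raise ValueError("CRC mismatch")
    return secret
```

A round trip (`parse_dhchap_key(make_dhchap_key(s)) == s`) is self-consistent under these assumptions; the trace's own keys use indicators 00 through 03, where 00 marks an untransformed secret of arbitrary length, which this sketch does not generate.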
00:25:17.921 nvme0n1 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.921 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:17.922 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.180 
07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 nvme0n1 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:18.180 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.181 07:30:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.181 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.438 nvme0n1 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.438 07:30:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.438 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.439 07:30:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.439 07:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.725 nvme0n1 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.725 07:30:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.725 07:30:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.725 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 nvme0n1 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.983 07:30:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.983 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:19.241 nvme0n1 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.242 
07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.242 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.500 nvme0n1 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.500 07:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.500 07:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.500 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.501 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.501 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.067 nvme0n1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.067 07:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.067 07:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.067 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.325 nvme0n1 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.325 07:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.325 07:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.325 07:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.583 nvme0n1 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.583 07:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:20.583 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.841 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:21.098 nvme0n1 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.098 
07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.098 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.355 nvme0n1 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:21.355 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.356 07:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.356 07:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.921 nvme0n1 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.921 07:30:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.921 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.179 07:30:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.179 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.744 nvme0n1 00:25:22.744 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.744 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.744 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.744 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.744 07:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.744 07:30:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.744 07:30:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.744 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.310 nvme0n1 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.310 07:30:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.310 07:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:23.876 nvme0n1 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.876 
07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.876 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.441 nvme0n1 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:24.441 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.442 07:30:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.442 07:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 nvme0n1 00:25:25.374 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.374 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.374 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.374 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.374 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.632 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.632 07:30:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.632 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.632 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.632 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.632 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.632 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.633 07:30:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.633 07:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.566 nvme0n1 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.566 07:30:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.566 07:30:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.566 07:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.501 nvme0n1 00:25:27.501 07:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.501 07:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.501 07:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.501 07:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.501 07:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.501 07:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.501 07:31:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.501 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.759 07:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:28.692 nvme0n1 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.692 
07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.692 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.693 07:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 nvme0n1 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.625 07:31:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.625 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.883 nvme0n1 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.883 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:29.884 07:31:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.884 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.142 nvme0n1 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.142 
07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.142 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.400 nvme0n1 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.400 07:31:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.400 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:30.657 nvme0n1 00:25:30.657 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.657 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.657 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.657 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.657 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.657 07:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.657 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.658 
07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.658 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.915 nvme0n1 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.916 07:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.916 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.174 nvme0n1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.174 07:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.174 07:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.174 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.432 nvme0n1 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.432 07:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:31.432 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.433 07:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.433 07:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.691 nvme0n1 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.691 07:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.691 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.692 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.692 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:31.949 nvme0n1 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.949 
07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.949 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.236 nvme0n1 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.236 07:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.236 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.496 nvme0n1 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.496 07:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.496 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.497 07:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.497 07:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.755 nvme0n1 00:25:32.755 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.755 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.755 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.755 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.755 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.755 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.012 07:31:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.012 07:31:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.012 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.013 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.270 nvme0n1 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.270 07:31:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.270 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.271 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:33.528 nvme0n1 00:25:33.528 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.528 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.528 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.528 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.528 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.528 07:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.528 
07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.528 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 nvme0n1 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.093 07:31:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.093 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.657 nvme0n1 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.657 07:31:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.657 07:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.658 07:31:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.658 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.224 nvme0n1 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.224 07:31:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.224 07:31:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.224 07:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.789 nvme0n1 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.789 07:31:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.789 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.790 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:36.354 nvme0n1 00:25:36.354 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.354 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.354 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.354 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.354 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.354 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.612 
07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.612 07:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.178 nvme0n1 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.178 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMjRjYWZjNTkwOTliZjZkZGJkNDdmNzQ0NDNkZGNjDwBP: 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: ]] 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU5OWFkMDk0ZGEzODE3ZTgzMWYxNDkwODkzOWVjZTE5MzYwNzE2ODM4NGMzYTg4ODdkOTY0MjUyMjU4NDY5M1gfASE=: 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.179 07:31:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.179 07:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.113 nvme0n1 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.113 07:31:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.113 07:31:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.113 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.114 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.114 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.114 07:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.486 nvme0n1 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.486 07:31:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzA0ZTQ5MWE5YzcxMGNkODU1OWE1OTUwZWE0YjA3NzQLf+Ey: 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjMGJlZjQ0MWM4NzM3YjhjMDg0MGUyODE1ZTQ5YjKxbYdp: 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.486 07:31:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.486 07:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.419 nvme0n1 00:25:40.419 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.419 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.419 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.419 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.420 07:31:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM0MTQyNWEwNjRmNWM1ODQ5MzY2MGFmOWNlMjNkZWI4YzBkMTNhYWMyMzkyMDQyCRcaKw==: 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzExOGYzMzZiYzQ0NWUwMmE3ZGM1YzAwOGQ1OTNlNGW6m2Ku: 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.420 07:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:41.353 nvme0n1 00:25:41.353 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.353 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.353 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.353 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.353 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.353 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmQ4MzM1ODU0ZjQyM2M5MTMzMzE2N2JhZGI3Yjk1MTNkNWNlMWQxNTA2NzMzYjg2Yjk1MTFlZThlN2QxOWU5NnYFfRM=: 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.354 
07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.354 07:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.287 nvme0n1 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZjNzA0NmRhOWU0OWE2ZjhkMjIyMjgxZGIzYzUxYWM1OGNmOTllN2EwY2RjZGVkOFdy9A==: 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: ]] 00:25:42.287 
07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNmZmNjkxOGM0MGQ5MzU2YTE4MTZjODcxYzU5MWViNjhlYTE4MGQ4ZmU5NmE40/ygVA==: 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:42.287 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.288 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.546 request: 00:25:42.546 { 00:25:42.546 "name": "nvme0", 00:25:42.546 "trtype": "tcp", 00:25:42.546 "traddr": "10.0.0.1", 00:25:42.546 "adrfam": "ipv4", 00:25:42.546 "trsvcid": "4420", 00:25:42.546 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.546 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.546 "prchk_reftag": false, 00:25:42.546 "prchk_guard": false, 00:25:42.546 "hdgst": false, 00:25:42.546 "ddgst": false, 00:25:42.546 "method": "bdev_nvme_attach_controller", 00:25:42.546 "req_id": 1 00:25:42.546 } 00:25:42.546 Got JSON-RPC error response 00:25:42.546 response: 00:25:42.546 { 00:25:42.546 "code": -5, 00:25:42.546 "message": "Input/output error" 00:25:42.546 } 00:25:42.546 07:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.546 07:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.546 request: 00:25:42.546 { 00:25:42.546 "name": "nvme0", 00:25:42.546 "trtype": "tcp", 00:25:42.546 "traddr": "10.0.0.1", 00:25:42.546 "adrfam": "ipv4", 00:25:42.546 
"trsvcid": "4420", 00:25:42.546 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.546 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.546 "prchk_reftag": false, 00:25:42.546 "prchk_guard": false, 00:25:42.546 "hdgst": false, 00:25:42.546 "ddgst": false, 00:25:42.546 "dhchap_key": "key2", 00:25:42.546 "method": "bdev_nvme_attach_controller", 00:25:42.546 "req_id": 1 00:25:42.546 } 00:25:42.546 Got JSON-RPC error response 00:25:42.546 response: 00:25:42.546 { 00:25:42.546 "code": -5, 00:25:42.546 "message": "Input/output error" 00:25:42.546 } 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.546 07:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.546 
07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.546 07:31:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.546 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.826 request: 00:25:42.826 { 00:25:42.826 "name": "nvme0", 00:25:42.826 "trtype": "tcp", 00:25:42.826 "traddr": "10.0.0.1", 00:25:42.826 "adrfam": "ipv4", 00:25:42.826 "trsvcid": "4420", 00:25:42.826 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.826 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.826 "prchk_reftag": false, 00:25:42.826 "prchk_guard": false, 00:25:42.826 "hdgst": false, 00:25:42.826 "ddgst": false, 00:25:42.826 "dhchap_key": "key1", 00:25:42.826 "dhchap_ctrlr_key": "ckey2", 00:25:42.826 "method": "bdev_nvme_attach_controller", 00:25:42.826 "req_id": 1 00:25:42.826 } 00:25:42.826 Got JSON-RPC error response 00:25:42.826 response: 00:25:42.826 { 00:25:42.826 "code": -5, 00:25:42.826 "message": "Input/output error" 00:25:42.826 } 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:42.826 07:31:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.826 rmmod nvme_tcp 00:25:42.826 rmmod nvme_fabrics 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2557554 ']' 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2557554 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2557554 ']' 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2557554 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.826 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2557554 00:25:42.827 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:42.827 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:25:42.827 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2557554' 00:25:42.827 killing process with pid 2557554 00:25:42.827 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2557554 00:25:42.827 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2557554 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.086 07:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:44.988 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:45.246 07:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:46.181 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:46.181 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:46.439 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:46.439 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:46.439 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:46.439 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:46.439 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:46.439 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:46.439 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:25:47.402 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:47.402 07:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.d73 /tmp/spdk.key-null.XgQ /tmp/spdk.key-sha256.h25 /tmp/spdk.key-sha384.4AD /tmp/spdk.key-sha512.UF8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:47.402 07:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:48.777 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:48.777 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:48.777 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:48.777 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:48.777 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:48.777 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:48.777 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:48.777 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:48.777 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:48.777 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:48.777 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:48.777 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:48.777 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:48.777 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:48.777 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:48.777 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:48.777 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:48.777 00:25:48.777 real 0m51.329s 00:25:48.777 user 0m49.326s 00:25:48.777 sys 0m5.810s 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:48.777 07:31:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.777 ************************************ 00:25:48.777 END TEST nvmf_auth_host 00:25:48.777 ************************************ 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.777 ************************************ 00:25:48.777 START TEST nvmf_digest 00:25:48.777 ************************************ 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:48.777 * Looking for test storage... 
00:25:48.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.777 07:31:21 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.777 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:25:48.778 07:31:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.306 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:51.307 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:51.307 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.307 07:31:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:51.307 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:51.307 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:51.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:25:51.307 00:25:51.307 --- 10.0.0.2 ping statistics --- 00:25:51.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.307 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:25:51.307 00:25:51.307 --- 10.0.0.1 ping statistics --- 00:25:51.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.307 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:51.307 ************************************ 00:25:51.307 START TEST nvmf_digest_clean 00:25:51.307 ************************************ 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2567168 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2567168 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2567168 ']' 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.307 [2024-07-25 07:31:23.502636] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:25:51.307 [2024-07-25 07:31:23.502740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.307 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.307 [2024-07-25 07:31:23.566946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.307 [2024-07-25 07:31:23.675795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.307 [2024-07-25 07:31:23.675852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.307 [2024-07-25 07:31:23.675880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.307 [2024-07-25 07:31:23.675891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.307 [2024-07-25 07:31:23.675901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.307 [2024-07-25 07:31:23.675932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.307 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 null0 00:25:51.565 [2024-07-25 07:31:23.851130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.565 [2024-07-25 07:31:23.875398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2567198 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2567198 /var/tmp/bperf.sock 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2567198 ']' 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.565 07:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 [2024-07-25 07:31:23.922053] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:25:51.565 [2024-07-25 07:31:23.922132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567198 ] 00:25:51.565 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.565 [2024-07-25 07:31:23.983316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.822 [2024-07-25 07:31:24.100503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.822 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.822 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:51.822 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:51.822 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:51.822 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:52.080 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.080 07:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.644 nvme0n1 00:25:52.644 07:31:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:52.644 07:31:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.644 Running I/O for 2 seconds... 00:25:55.171 00:25:55.171 Latency(us) 00:25:55.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.171 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:55.172 nvme0n1 : 2.01 13221.94 51.65 0.00 0.00 9671.88 4660.34 21359.88 00:25:55.172 =================================================================================================================== 00:25:55.172 Total : 13221.94 51.65 0.00 0.00 9671.88 4660.34 21359.88 00:25:55.172 0 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:55.172 | select(.opcode=="crc32c") 00:25:55.172 | "\(.module_name) \(.executed)"' 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2567198 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2567198 ']' 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2567198 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2567198 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2567198' 00:25:55.172 killing process with pid 2567198 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2567198 00:25:55.172 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.172 00:25:55.172 Latency(us) 00:25:55.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.172 =================================================================================================================== 00:25:55.172 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:25:55.172 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2567198 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2567634 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2567634 /var/tmp/bperf.sock 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2567634 ']' 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:55.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:55.430 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:55.430 [2024-07-25 07:31:27.784110] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:25:55.430 [2024-07-25 07:31:27.784186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567634 ] 00:25:55.430 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:55.430 Zero copy mechanism will not be used. 00:25:55.430 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.430 [2024-07-25 07:31:27.848480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.688 [2024-07-25 07:31:27.964558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.688 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.688 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:55.688 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:55.688 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:55.688 07:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:55.945 07:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.945 07:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.202 nvme0n1 00:25:56.202 07:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:56.202 07:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.459 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.459 Zero copy mechanism will not be used. 00:25:56.459 Running I/O for 2 seconds... 00:25:58.356 00:25:58.356 Latency(us) 00:25:58.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.356 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:58.356 nvme0n1 : 2.00 2604.96 325.62 0.00 0.00 6138.08 5582.70 13786.83 00:25:58.356 =================================================================================================================== 00:25:58.356 Total : 2604.96 325.62 0.00 0.00 6138.08 5582.70 13786.83 00:25:58.356 0 00:25:58.356 07:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:58.356 07:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:58.356 07:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:58.356 07:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock accel_get_stats 00:25:58.356 07:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:58.356 | select(.opcode=="crc32c") 00:25:58.356 | "\(.module_name) \(.executed)"' 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2567634 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2567634 ']' 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2567634 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:58.613 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.614 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2567634 00:25:58.614 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:58.614 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:58.614 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2567634' 00:25:58.614 killing process with pid 2567634 00:25:58.614 07:31:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2567634 00:25:58.614 Received shutdown signal, test time was about 2.000000 seconds 00:25:58.614 00:25:58.614 Latency(us) 00:25:58.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.614 =================================================================================================================== 00:25:58.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.614 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2567634 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2568127 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2568127 /var/tmp/bperf.sock 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:58.871 07:31:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2568127 ']' 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:58.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.871 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:58.871 [2024-07-25 07:31:31.386981] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:25:58.871 [2024-07-25 07:31:31.387060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568127 ] 00:25:59.129 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.129 [2024-07-25 07:31:31.448349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.129 [2024-07-25 07:31:31.562153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.129 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.129 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:59.130 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:59.130 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:59.130 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:59.694 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.695 07:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.952 nvme0n1 00:25:59.952 07:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:59.952 07:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.209 Running I/O for 2 seconds... 00:26:02.107 00:26:02.107 Latency(us) 00:26:02.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.107 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:02.107 nvme0n1 : 2.00 21231.50 82.94 0.00 0.00 6022.12 2463.67 10825.58 00:26:02.107 =================================================================================================================== 00:26:02.107 Total : 21231.50 82.94 0.00 0.00 6022.12 2463.67 10825.58 00:26:02.107 0 00:26:02.107 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:02.107 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:02.107 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.107 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.107 | select(.opcode=="crc32c") 00:26:02.107 | "\(.module_name) \(.executed)"' 00:26:02.107 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 2568127 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2568127 ']' 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2568127 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2568127 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2568127' 00:26:02.365 killing process with pid 2568127 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2568127 00:26:02.365 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.365 00:26:02.365 Latency(us) 00:26:02.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.365 =================================================================================================================== 00:26:02.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.365 07:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2568127 00:26:02.931 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:02.931 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:26:02.931 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:02.931 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2568539 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2568539 /var/tmp/bperf.sock 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2568539 ']' 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.932 [2024-07-25 07:31:35.210691] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:02.932 [2024-07-25 07:31:35.210768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568539 ] 00:26:02.932 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.932 Zero copy mechanism will not be used. 00:26:02.932 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.932 [2024-07-25 07:31:35.271675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.932 [2024-07-25 07:31:35.388837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:02.932 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.531 07:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.531 07:31:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.789 nvme0n1 00:26:03.789 07:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:03.789 07:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.789 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.789 Zero copy mechanism will not be used. 00:26:03.789 Running I/O for 2 seconds... 00:26:05.685 00:26:05.685 Latency(us) 00:26:05.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.686 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:05.686 nvme0n1 : 2.01 2617.22 327.15 0.00 0.00 6101.08 3422.44 9417.77 00:26:05.686 =================================================================================================================== 00:26:05.686 Total : 2617.22 327.15 0.00 0.00 6101.08 3422.44 9417.77 00:26:05.686 0 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:05.943 | select(.opcode=="crc32c") 00:26:05.943 | "\(.module_name) \(.executed)"' 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:05.943 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2568539 00:26:05.944 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2568539 ']' 00:26:05.944 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2568539 00:26:05.944 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2568539 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2568539' 00:26:06.202 killing process with pid 2568539 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2568539 00:26:06.202 Received shutdown signal, test time was about 2.000000 seconds 
00:26:06.202 00:26:06.202 Latency(us) 00:26:06.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.202 =================================================================================================================== 00:26:06.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.202 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2568539 00:26:06.460 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2567168 00:26:06.460 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2567168 ']' 00:26:06.460 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2567168 00:26:06.460 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:06.460 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.461 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2567168 00:26:06.461 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:06.461 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:06.461 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2567168' 00:26:06.461 killing process with pid 2567168 00:26:06.461 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2567168 00:26:06.461 07:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2567168 00:26:06.719 00:26:06.719 real 0m15.645s 00:26:06.719 user 0m30.534s 00:26:06.719 sys 0m4.295s 
00:26:06.719 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:06.719 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.719 ************************************ 00:26:06.719 END TEST nvmf_digest_clean 00:26:06.720 ************************************ 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:06.720 ************************************ 00:26:06.720 START TEST nvmf_digest_error 00:26:06.720 ************************************ 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2569041 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@482 -- # waitforlisten 2569041 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2569041 ']' 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.720 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.720 [2024-07-25 07:31:39.192928] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:06.720 [2024-07-25 07:31:39.193025] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.720 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.979 [2024-07-25 07:31:39.257644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.979 [2024-07-25 07:31:39.368046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.979 [2024-07-25 07:31:39.368122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:06.979 [2024-07-25 07:31:39.368142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.979 [2024-07-25 07:31:39.368160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.979 [2024-07-25 07:31:39.368174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.979 [2024-07-25 07:31:39.368207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.979 [2024-07-25 07:31:39.432839] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.979 07:31:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.979 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.237 null0 00:26:07.237 [2024-07-25 07:31:39.548916] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.237 [2024-07-25 07:31:39.573128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2569120 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2569120 /var/tmp/bperf.sock 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2569120 ']' 
00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:07.237 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.237 [2024-07-25 07:31:39.620174] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:07.237 [2024-07-25 07:31:39.620264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569120 ] 00:26:07.237 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.237 [2024-07-25 07:31:39.680813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.496 [2024-07-25 07:31:39.798568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.496 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:07.496 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:07.496 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:07.496 07:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:07.754 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:07.754 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.754 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.754 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.754 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.754 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.011 nvme0n1 00:26:08.011 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:08.011 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.011 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.011 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.011 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:08.011 07:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:08.270 Running I/O for 2 seconds... 00:26:08.270 [2024-07-25 07:31:40.664151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.664203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.664226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.676864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.676900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.676920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.692452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.692484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.692501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.705308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.705339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10284 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.705357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.718848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.718893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.718913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.733740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.733787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.745289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.745317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.745333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.760820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.760848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.760864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.774707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.774737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.774755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.270 [2024-07-25 07:31:40.786760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.270 [2024-07-25 07:31:40.786792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.270 [2024-07-25 07:31:40.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.800135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.800165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.800182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.814640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.814670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.814687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.827660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.827694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.827713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.840173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.840202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.840219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.856656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.856690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.856709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.871353] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.871384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.871417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.884315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.884342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.884357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.898013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.898044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.898061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.910630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.910659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.910676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.924707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.924738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.924754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.939640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.939669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.939685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.950698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.950727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.950749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.964644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.964674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.964691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.980112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.980146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.980165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:40.992518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:40.992562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:40.992579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:41.004739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:41.004773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:41.004792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:41.019423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:41.019452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 
07:31:41.019468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:41.033422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:41.033452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:41.033468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:41.045292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:41.045322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:41.045339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.534 [2024-07-25 07:31:41.060616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.534 [2024-07-25 07:31:41.060646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.534 [2024-07-25 07:31:41.060664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.792 [2024-07-25 07:31:41.076351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.792 [2024-07-25 07:31:41.076382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7717 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.792 [2024-07-25 07:31:41.076399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.792 [2024-07-25 07:31:41.089084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.792 [2024-07-25 07:31:41.089118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.792 [2024-07-25 07:31:41.089137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.792 [2024-07-25 07:31:41.104376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.792 [2024-07-25 07:31:41.104405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.792 [2024-07-25 07:31:41.104421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.792 [2024-07-25 07:31:41.115988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.792 [2024-07-25 07:31:41.116015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.792 [2024-07-25 07:31:41.116030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.792 [2024-07-25 07:31:41.130820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:08.792 [2024-07-25 07:31:41.130864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.792 [2024-07-25 07:31:41.130881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.792 [2024-07-25 07:31:41.143655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.792 [2024-07-25 07:31:41.143688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.792 [2024-07-25 07:31:41.143707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.792 [2024-07-25 07:31:41.158911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.792 [2024-07-25 07:31:41.158945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.792 [2024-07-25 07:31:41.158964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.792 [2024-07-25 07:31:41.176550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.792 [2024-07-25 07:31:41.176594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.792 [2024-07-25 07:31:41.176610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.792 [2024-07-25 07:31:41.188917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.188960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.188980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.202650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.202684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.202703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.217436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.217482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.217499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.229423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.229452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.229468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.244078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.244107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.244123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.256192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.256226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.256254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.271457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.271489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.271506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.283419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.283471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.283487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.297421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.297452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.297468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.793 [2024-07-25 07:31:41.311748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:08.793 [2024-07-25 07:31:41.311782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.793 [2024-07-25 07:31:41.311799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.324042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.324072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.324088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.337333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.337364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.337381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.350022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.350053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.350070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.364008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.364038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.364056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.375344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.375375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.375392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.388297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.388327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.388344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.400926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.400956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.400973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.413983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.414035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.414052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.425690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.425718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.425734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.438808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.438839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.438856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.452330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.452361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.452379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.465837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.465868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.465885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.476685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.476713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.476729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.490009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.490040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.490057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.504007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.504037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.504054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.518009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.518040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.518057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.529711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.529740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.529764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.543724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.543754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.543771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.555648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.555675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.555691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.051 [2024-07-25 07:31:41.569266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.051 [2024-07-25 07:31:41.569294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.051 [2024-07-25 07:31:41.569310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.582720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.582749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.582766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.593708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.593738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.593755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.606924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.606967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.606983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.620979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.621010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.621027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.632448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.632477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.632493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.645883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.645922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.645939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.657846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.657873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.657888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.672412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.672441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.672459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.686797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.686827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.686844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.697909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.697940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.697956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.713156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.713185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.713200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.723812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.723839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.723854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.738980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.739008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.739023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.752793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.752824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.752849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.763951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.763981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.763998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.777876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.777906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.777924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.788817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.788844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.788875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.803775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.310 [2024-07-25 07:31:41.803805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.310 [2024-07-25 07:31:41.803822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.310 [2024-07-25 07:31:41.815422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.311 [2024-07-25 07:31:41.815451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.311 [2024-07-25 07:31:41.815484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.311 [2024-07-25 07:31:41.828309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.311 [2024-07-25 07:31:41.828339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.311 [2024-07-25 07:31:41.828355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.840669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.840697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.840712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.855398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.855428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.855445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.866971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.867005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.867022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.881968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.882013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.882029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.897560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.897593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.897611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.908363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.908393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.908409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.923583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.923612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.569 [2024-07-25 07:31:41.923627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.569 [2024-07-25 07:31:41.939588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.569 [2024-07-25 07:31:41.939634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:41.939651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:41.950212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:41.950262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:41.950279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:41.964030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:41.964060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:41.964076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:41.978580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:41.978610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:41.978627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:41.989780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:41.989811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:41.989827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.004836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.004864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.004880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.016949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.016979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.016997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.030332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.030363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.030380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.042741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.042770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.042786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.056898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.056929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.056945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.067846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.067873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.067888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.081929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.081957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.081972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.570 [2024-07-25 07:31:42.095133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.570 [2024-07-25 07:31:42.095175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.570 [2024-07-25 07:31:42.095201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.828 [2024-07-25 07:31:42.108331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.828 [2024-07-25 07:31:42.108361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.828 [2024-07-25 07:31:42.108379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.828 [2024-07-25 07:31:42.119312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.828 [2024-07-25 07:31:42.119340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.828 [2024-07-25 07:31:42.119356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.828 [2024-07-25 07:31:42.133904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.828 [2024-07-25 07:31:42.133935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.828 [2024-07-25 07:31:42.133952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.828 [2024-07-25 07:31:42.148115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.828 [2024-07-25 07:31:42.148145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.828 [2024-07-25 07:31:42.148162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.828 [2024-07-25 07:31:42.158982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.828 [2024-07-25 07:31:42.159010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.828 [2024-07-25 07:31:42.159025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.829 [2024-07-25 07:31:42.173788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.829 [2024-07-25 07:31:42.173818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.829 [2024-07-25 07:31:42.173834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.829 [2024-07-25 07:31:42.185290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.829 [2024-07-25 07:31:42.185319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.829 [2024-07-25 07:31:42.185336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.829 [2024-07-25 07:31:42.198923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20)
00:26:09.829 [2024-07-25 07:31:42.198951] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.198967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.213353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.213404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.213424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.226015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.226045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.226062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.240654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.240684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.240717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.253554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.253584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.253600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.266328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.266370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.266385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.281639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.281674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.281693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.295348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.295376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.295392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.307199] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.307253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.307275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.321475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.321505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.321522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.335362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.335392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.335408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.829 [2024-07-25 07:31:42.348116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:09.829 [2024-07-25 07:31:42.348149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.829 [2024-07-25 07:31:42.348168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.361872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.361905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.361924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.374809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.374842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.374861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.389578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.389607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.389623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.401792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.401825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.401844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.415452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.415479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.415495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.429155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.429186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.429203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.442364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.442394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.442416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.454883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.454916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.454935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.470055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.470088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.470107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.481356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.481399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.481414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.495932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.495962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.495977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.507664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.507698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:10.087 [2024-07-25 07:31:42.507716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.523107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.523142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.523161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.536608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.536652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.536669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.549325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.549369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.549386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.562665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.562694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:60 nsid:1 lba:11793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.562710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.576094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.576123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.576138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.590319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.590347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.590378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.087 [2024-07-25 07:31:42.602311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.087 [2024-07-25 07:31:42.602339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.087 [2024-07-25 07:31:42.602354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.345 [2024-07-25 07:31:42.617874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.345 [2024-07-25 07:31:42.617906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.345 [2024-07-25 07:31:42.617923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.345 [2024-07-25 07:31:42.634366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.345 [2024-07-25 07:31:42.634396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.345 [2024-07-25 07:31:42.634413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.345 [2024-07-25 07:31:42.645103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cccd20) 00:26:10.345 [2024-07-25 07:31:42.645135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.345 [2024-07-25 07:31:42.645153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.345 00:26:10.345 Latency(us) 00:26:10.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.345 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:10.345 nvme0n1 : 2.00 18977.56 74.13 0.00 0.00 6735.60 3325.35 20486.07 00:26:10.345 =================================================================================================================== 00:26:10.345 Total : 18977.56 74.13 0.00 0.00 6735.60 3325.35 20486.07 00:26:10.345 0 00:26:10.345 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:10.345 
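The `(00/22)` pair that `spdk_nvme_print_completion` logs with every `COMMAND TRANSIENT TRANSPORT ERROR` above is the NVMe completion's Status Code Type / Status Code. A minimal decoder, purely illustrative (this helper is not part of SPDK; the table is a small excerpt of generic status codes):

```python
# Hypothetical helper (not SPDK code): decode the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion, e.g. "(00/22)" in the log above.

# Excerpt of NVMe Generic Command Status values (SCT 0x0); 0x22 is
# "Command Transient Transport Error", matching the log text.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x04: "DATA TRANSFER ERROR",
    0x22: "COMMAND TRANSIENT TRANSPORT ERROR",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for a completion status (SCT, SC) pair."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"GENERIC 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status(0x0, 0x22))  # COMMAND TRANSIENT TRANSPORT ERROR
```

The `dnr:0` ("do not retry" clear) in the same lines is why these failures are counted rather than fatal: the status is explicitly retryable.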
07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:10.345 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:10.345 | .driver_specific 00:26:10.345 | .nvme_error 00:26:10.345 | .status_code 00:26:10.345 | .command_transient_transport_error' 00:26:10.345 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2569120 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2569120 ']' 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2569120 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2569120 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2569120' 00:26:10.603 killing process with pid 2569120 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 2569120 00:26:10.603 Received shutdown signal, test time was about 2.000000 seconds 00:26:10.603 00:26:10.603 Latency(us) 00:26:10.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.603 =================================================================================================================== 00:26:10.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.603 07:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2569120 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2569530 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2569530 /var/tmp/bperf.sock 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2569530 ']' 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:10.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.860 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:10.860 [2024-07-25 07:31:43.267008] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:10.860 [2024-07-25 07:31:43.267086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569530 ] 00:26:10.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.860 Zero copy mechanism will not be used. 
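The `get_transient_errcount` helper traced earlier pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` and asserts the count is positive (`(( 149 > 0 ))` above). A sketch of the same extraction in Python; the JSON shape mirrors that jq path, but the payload here is a hand-made sample, not real RPC output:

```python
import json

# Illustrative sample shaped after the jq path used by get_transient_errcount;
# only the fields that filter touches are included, and the count of 149
# matches the "(( 149 > 0 ))" check seen in the trace above.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": { "command_transient_transport_error": 149 }
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat: dict) -> int:
    """Walk the same path as the jq filter and return the error counter."""
    bdev = iostat["bdevs"][0]
    return bdev["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]

print(transient_errcount(sample))  # 149
```

Note the counter only exists because the controller was attached with `bdev_nvme_set_options --nvme-error-stat`, which the trace issues before attaching `nvme0`.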
00:26:10.860 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.860 [2024-07-25 07:31:43.326671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.117 [2024-07-25 07:31:43.436732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.117 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.117 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:11.117 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:11.117 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:11.375 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:11.375 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.375 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.375 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.375 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.375 07:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.633 nvme0n1 00:26:11.633 07:31:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:11.633 07:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.633 07:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.633 07:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.633 07:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:11.633 07:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:11.890 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.890 Zero copy mechanism will not be used. 00:26:11.890 Running I/O for 2 seconds... 
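The `accel_error_inject_error -o crc32c -t corrupt -i 32` RPC above corrupts 32 crc32c accel operations, and the controller was attached with `--ddgst`, so the receive path's recomputed NVMe/TCP data digest (a CRC32C over the PDU payload) no longer matches and each affected read completes with a digest error. A dependency-free sketch of that mismatch, assuming only that the digest is standard CRC32C (Castagnoli); this is not SPDK's accel implementation:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC32C (reflected polynomial 0x82F63B78); slow but self-contained."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

payload = b"\x00" * 512
good = crc32c(payload)
corrupted = crc32c(payload[:-1] + b"\x01")  # one flipped byte in the PDU data
assert good != corrupted  # receiver would log "data digest error" and fail the I/O
```

In the log this surfaces as `nvme_tcp_accel_seq_recv_compute_crc32_done: data digest error`, followed by the retryable `(00/22)` completion for the affected command.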
00:26:11.890 [2024-07-25 07:31:44.267062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.267122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.267145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.278838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.278871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.278898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.289265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.289316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.289334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.299621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.299656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.299676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.309594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.309629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.309648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.319776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.319810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.319829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.330066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.330101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.330120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.340469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.340498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.340514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.350504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.350532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.350566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.360870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.360904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.360923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.371320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.371349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.371366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.381858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.381894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:11.891 [2024-07-25 07:31:44.381914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.392175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.392209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.402664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.402698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.402717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.891 [2024-07-25 07:31:44.412721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:11.891 [2024-07-25 07:31:44.412756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.891 [2024-07-25 07:31:44.412775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.149 [2024-07-25 07:31:44.422423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.149 [2024-07-25 07:31:44.422454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.149 [2024-07-25 07:31:44.422471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.149 [2024-07-25 07:31:44.431921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.149 [2024-07-25 07:31:44.431954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.149 [2024-07-25 07:31:44.431973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.441321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.441350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.441366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.450646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.450678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.450703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.460193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.460224] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.460250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.469652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.469684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.469703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.479165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.479198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.479216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.488602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.488634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.488653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.498006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.498038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.498057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.507551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.507583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.507602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.517132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.517164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.517183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.526662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.526695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.526714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.536155] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.536193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.536212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.545405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.545433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.545448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.554648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.554680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.554698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.564084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.564135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.573519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.573562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.573581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.582939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.582971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.582989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.592496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.592524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.592557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.602014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.602046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.602065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.611376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.611404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.611420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.621060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.621093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.621111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.630494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.630538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.630557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.640171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.640203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 
07:31:44.640221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.649550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.649596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.649614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.659130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.659163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.659181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.150 [2024-07-25 07:31:44.668850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.150 [2024-07-25 07:31:44.668882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.150 [2024-07-25 07:31:44.668900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.408 [2024-07-25 07:31:44.678621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.678649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.678680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.688439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.688482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.688498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.697861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.697893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.697918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.707391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.707418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.707434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.716862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.716893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.716912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.726234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.726275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.726294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.735925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.735958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.735977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.745438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.745467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.745483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.755003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 
07:31:44.755035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.755053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.764517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.764545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.764561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.774312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.774341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.774356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.783761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.783800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.783820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.793406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.793436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.793453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.802936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.802969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.802987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.812435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.812466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.812482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.821833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.821867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.821886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.831374] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.831403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.831419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.840844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.840878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.840896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.850269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.850301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.850319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.409 [2024-07-25 07:31:44.859811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:12.409 [2024-07-25 07:31:44.859842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.409 [2024-07-25 07:31:44.859860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.869468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.409 [2024-07-25 07:31:44.869496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.409 [2024-07-25 07:31:44.869512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.878997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.409 [2024-07-25 07:31:44.879030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.409 [2024-07-25 07:31:44.879048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.888568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.409 [2024-07-25 07:31:44.888600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.409 [2024-07-25 07:31:44.888618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.898220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.409 [2024-07-25 07:31:44.898261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.409 [2024-07-25 07:31:44.898282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.907576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.409 [2024-07-25 07:31:44.907622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.409 [2024-07-25 07:31:44.907641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.917110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.409 [2024-07-25 07:31:44.917142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.409 [2024-07-25 07:31:44.917161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.409 [2024-07-25 07:31:44.926605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.410 [2024-07-25 07:31:44.926637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.410 [2024-07-25 07:31:44.926655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.410 [2024-07-25 07:31:44.935989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.410 [2024-07-25 07:31:44.936035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.410 [2024-07-25 07:31:44.936053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:44.945842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:44.945880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:44.945900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:44.955440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:44.955468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:44.955483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:44.964858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:44.964890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:44.964908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:44.974440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:44.974482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:44.974498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:44.984018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:44.984050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:44.984068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:44.993630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:44.993665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:44.993684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.668 [2024-07-25 07:31:45.003100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.668 [2024-07-25 07:31:45.003134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.668 [2024-07-25 07:31:45.003153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.012544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.012578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.012596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.022142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.022175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.022194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.031553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.031586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.031605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.041019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.041051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.041070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.050441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.050469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.050485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.059998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.060030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.060048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.069657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.069690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.069708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.079034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.079066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.079084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.088647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.088676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.088691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.098137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.098171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.098189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.107514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.107543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.107581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.116749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.116781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.116800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.126084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.126117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.126135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.135518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.135560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.135575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.144978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.145010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.145029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.154487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.154530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.154545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.163933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.163965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.163984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.173372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.173400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.173415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.182893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.182925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.182944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.669 [2024-07-25 07:31:45.192326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.669 [2024-07-25 07:31:45.192358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.669 [2024-07-25 07:31:45.192374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.201822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.201854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.201873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.211556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.211597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.211615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.221404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.221431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.221446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.230925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.230956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.230974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.240449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.240478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.240494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.249958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.249990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.250009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.259533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.259560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.259576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.269058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.269091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.269109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.278509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.278536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.278569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.287933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.287966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.287984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.297419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.297447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.297463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.306910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.306942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.306960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.316431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.316459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.316474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.325788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.325819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.325837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.335205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.335237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.335265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.344888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.344919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.344937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.354442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.354469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.354490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.363906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.363938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.363956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.373411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.373441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.373457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.382950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.382983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.383002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.392490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.392537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.392553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.402146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.402178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.402196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.411655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.411686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.411704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.421161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.421194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.928 [2024-07-25 07:31:45.421212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.928 [2024-07-25 07:31:45.430592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.928 [2024-07-25 07:31:45.430624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.929 [2024-07-25 07:31:45.430643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.929 [2024-07-25 07:31:45.440161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.929 [2024-07-25 07:31:45.440192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.929 [2024-07-25 07:31:45.440211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.929 [2024-07-25 07:31:45.449618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:12.929 [2024-07-25 07:31:45.449649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.929 [2024-07-25 07:31:45.449668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.459218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.459255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.459274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.468693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.468726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.468744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.478101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.478133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.478151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.487601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.487633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.487652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.497143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.497175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.497194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.506652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.506684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.506702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.516108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.516140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.516164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.525417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.525446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.525463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.534643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.187 [2024-07-25 07:31:45.534674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.187 [2024-07-25 07:31:45.534692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.187 [2024-07-25 07:31:45.544220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.544260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.544280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.553695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.553726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.553744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.562952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.562995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.572276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.572322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.572338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.581667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.581698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.581717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.591075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.591107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.591125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.600416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.600448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.600465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.609758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.609790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.609809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.619188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.619219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.619237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.188 [2024-07-25 07:31:45.628607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700)
00:26:13.188 [2024-07-25 07:31:45.628642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.188 [2024-07-25 07:31:45.628662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0
sqhd:0061 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.638139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.638173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.638193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.647395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.647424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.647440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.656752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.656786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.656804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.666715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.666749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.666768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.676551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.676581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.676615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.686331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.686359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.686375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.696012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.696045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.696064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.705618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.705650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 
07:31:45.705669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.188 [2024-07-25 07:31:45.715327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.188 [2024-07-25 07:31:45.715356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.188 [2024-07-25 07:31:45.715373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.446 [2024-07-25 07:31:45.725002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.446 [2024-07-25 07:31:45.725034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.446 [2024-07-25 07:31:45.725053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.446 [2024-07-25 07:31:45.734473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.446 [2024-07-25 07:31:45.734502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.446 [2024-07-25 07:31:45.734518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.446 [2024-07-25 07:31:45.744019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.446 [2024-07-25 07:31:45.744052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.446 [2024-07-25 07:31:45.744070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.753441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.753475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.753493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.762865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.762898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.762922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.772312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.772341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.772357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.782051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.782084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.782103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.791692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.791724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.791743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.800835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.800867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.800886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.810157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.810185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.810201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.819434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.819461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.819477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.828661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.828688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.828703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.837815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.837843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.837858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.847042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.847079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.847098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.856203] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.856235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.856264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.865672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.865704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.865722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.875036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.875067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.875085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.884154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.884186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.884204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.893499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.893528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.893545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.903663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.903698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.903718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.913398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.913429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.913445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.922845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.922878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.922902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.932385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.932412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.932428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.942050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.942083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.942101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.951540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.951579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.951597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.960991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.961024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.961043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.447 [2024-07-25 07:31:45.970457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.447 [2024-07-25 07:31:45.970484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.447 [2024-07-25 07:31:45.970499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:45.979916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:45.979948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:45.979966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:45.989458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:45.989500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:45.989515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:45.998799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:45.998832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:45.998850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.008348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.008412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.017853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.017886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.017904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.027414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.027456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.027472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.036844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.036876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.036895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.046295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.046338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.046353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.055797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.055829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.055847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.065362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.065391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.065407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.074995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.075027] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.075045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.084443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.084471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.084487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.094107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.094140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.094158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.103783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.103816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.103834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.113193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.113224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.113252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.122668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.122700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.122718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.132208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.132239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.132270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.141668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.141699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.151008] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.151039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.151057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.160401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.160429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.160445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.170090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.170123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.170147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.179492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.179520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.706 [2024-07-25 07:31:46.179550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:26:13.706 [2024-07-25 07:31:46.188940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.706 [2024-07-25 07:31:46.188971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.707 [2024-07-25 07:31:46.188989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.707 [2024-07-25 07:31:46.198424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.707 [2024-07-25 07:31:46.198451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.707 [2024-07-25 07:31:46.198466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.707 [2024-07-25 07:31:46.207914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.707 [2024-07-25 07:31:46.207946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.707 [2024-07-25 07:31:46.207964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.707 [2024-07-25 07:31:46.217438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.707 [2024-07-25 07:31:46.217466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.707 [2024-07-25 07:31:46.217482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.707 [2024-07-25 07:31:46.226575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.707 [2024-07-25 07:31:46.226607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.707 [2024-07-25 07:31:46.226625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.965 [2024-07-25 07:31:46.236272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.965 [2024-07-25 07:31:46.236318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.965 [2024-07-25 07:31:46.236334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.965 [2024-07-25 07:31:46.245858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.965 [2024-07-25 07:31:46.245890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.965 [2024-07-25 07:31:46.245908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.965 [2024-07-25 07:31:46.255456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.965 [2024-07-25 07:31:46.255490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.965 [2024-07-25 
07:31:46.255507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.965 [2024-07-25 07:31:46.264654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbd7700) 00:26:13.965 [2024-07-25 07:31:46.264686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.965 [2024-07-25 07:31:46.264704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.965 00:26:13.965 Latency(us) 00:26:13.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.965 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:13.965 nvme0n1 : 2.00 3247.58 405.95 0.00 0.00 4920.34 4441.88 11553.75 00:26:13.965 =================================================================================================================== 00:26:13.965 Total : 3247.58 405.95 0.00 0.00 4920.34 4441.88 11553.75 00:26:13.965 0 00:26:13.965 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:13.965 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:13.965 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:13.965 | .driver_specific 00:26:13.965 | .nvme_error 00:26:13.965 | .status_code 00:26:13.965 | .command_transient_transport_error' 00:26:13.965 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- 
# (( 210 > 0 )) 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2569530 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2569530 ']' 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2569530 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2569530 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2569530' 00:26:14.224 killing process with pid 2569530 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2569530 00:26:14.224 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.224 00:26:14.224 Latency(us) 00:26:14.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.224 =================================================================================================================== 00:26:14.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.224 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2569530 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 
00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2569943 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2569943 /var/tmp/bperf.sock 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2569943 ']' 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:14.483 07:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.483 [2024-07-25 07:31:46.879654] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:26:14.483 [2024-07-25 07:31:46.879732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569943 ] 00:26:14.483 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.483 [2024-07-25 07:31:46.940523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.740 [2024-07-25 07:31:47.057606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.740 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.740 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:14.740 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:14.740 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:14.998 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:14.998 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.998 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.998 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.998 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.998 07:31:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.563 nvme0n1 00:26:15.563 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:15.563 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.563 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:15.563 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.563 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:15.563 07:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.845 Running I/O for 2 seconds... 
00:26:15.845 [2024-07-25 07:31:48.133505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ed920 00:26:15.845 [2024-07-25 07:31:48.134785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.134830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.147209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fef90 00:26:15.845 [2024-07-25 07:31:48.148527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.148573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.160743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ee190 00:26:15.845 [2024-07-25 07:31:48.162235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.162289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.174325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fe720 00:26:15.845 [2024-07-25 07:31:48.175979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.176013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.185172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0ff8 00:26:15.845 [2024-07-25 07:31:48.185976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.186008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.199740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f96f8 00:26:15.845 [2024-07-25 07:31:48.201616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.201660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.211697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eff18 00:26:15.845 [2024-07-25 07:31:48.213016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.213048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.224810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e7c50 00:26:15.845 [2024-07-25 07:31:48.225958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.225990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.237738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f3e60 00:26:15.845 [2024-07-25 07:31:48.239217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.239256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.250490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ef6a8 00:26:15.845 [2024-07-25 07:31:48.251970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.252002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.263237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fd640 00:26:15.845 [2024-07-25 07:31:48.264715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.264747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.276322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ed920 00:26:15.845 [2024-07-25 07:31:48.277951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.277983] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.286813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f6458 00:26:15.845 [2024-07-25 07:31:48.287744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.287774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.299425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fb8b8 00:26:15.845 [2024-07-25 07:31:48.300440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.300468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.312117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190de470 00:26:15.845 [2024-07-25 07:31:48.313053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.845 [2024-07-25 07:31:48.313083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.845 [2024-07-25 07:31:48.324804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f8a50 00:26:15.845 [2024-07-25 07:31:48.325726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:15.846 [2024-07-25 07:31:48.325756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.846 [2024-07-25 07:31:48.337908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ecc78 00:26:15.846 [2024-07-25 07:31:48.339013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.846 [2024-07-25 07:31:48.339044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.846 [2024-07-25 07:31:48.349970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f96f8 00:26:15.846 [2024-07-25 07:31:48.351006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.846 [2024-07-25 07:31:48.351040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.363034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7da8 00:26:16.115 [2024-07-25 07:31:48.364263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.364294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.375972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e27f0 00:26:16.115 [2024-07-25 07:31:48.377444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2225 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.377474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.389455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7538 00:26:16.115 [2024-07-25 07:31:48.391119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.391151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.401540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e3060 00:26:16.115 [2024-07-25 07:31:48.403156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.403187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.414859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f96f8 00:26:16.115 [2024-07-25 07:31:48.416626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.416670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.428207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f4f40 00:26:16.115 [2024-07-25 07:31:48.430130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.430161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.439986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ee190 00:26:16.115 [2024-07-25 07:31:48.441518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.441551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.452576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ec408 00:26:16.115 [2024-07-25 07:31:48.454039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.454070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.465300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f3a28 00:26:16.115 [2024-07-25 07:31:48.466736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.466766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.479537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f81e0 00:26:16.115 [2024-07-25 07:31:48.481660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.481691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.488544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e23b8 00:26:16.115 [2024-07-25 07:31:48.489562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.489593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.500600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7da8 00:26:16.115 [2024-07-25 07:31:48.501600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.501630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.514806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f1868 00:26:16.115 [2024-07-25 07:31:48.515901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.515932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.527949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with 
pdu=0x2000190e0ea0 00:26:16.115 [2024-07-25 07:31:48.529213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.529250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.541237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0788 00:26:16.115 [2024-07-25 07:31:48.542726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.542757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.553203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f35f0 00:26:16.115 [2024-07-25 07:31:48.554628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.554659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.566503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e1710 00:26:16.115 [2024-07-25 07:31:48.568107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.115 [2024-07-25 07:31:48.568137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.115 [2024-07-25 07:31:48.578394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x134d020) with pdu=0x2000190e4de8
00:26:16.115 [2024-07-25 07:31:48.579579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.115 [2024-07-25 07:31:48.579610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:16.115 [2024-07-25 07:31:48.591922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ea680
00:26:16.115 [2024-07-25 07:31:48.593540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.115 [2024-07-25 07:31:48.593568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:16.115 [2024-07-25 07:31:48.603267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f4b08
00:26:16.115 [2024-07-25 07:31:48.604534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.116 [2024-07-25 07:31:48.604577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:16.116 [2024-07-25 07:31:48.615775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fc128
00:26:16.116 [2024-07-25 07:31:48.617032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.116 [2024-07-25 07:31:48.617062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:16.116 [2024-07-25 07:31:48.628472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fa7d8
00:26:16.116 [2024-07-25 07:31:48.629722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.116 [2024-07-25 07:31:48.629751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:16.116 [2024-07-25 07:31:48.641029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f96f8
00:26:16.116 [2024-07-25 07:31:48.642311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.116 [2024-07-25 07:31:48.642339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.654086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190df118
00:26:16.374 [2024-07-25 07:31:48.655547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.655574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.667489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f9f68
00:26:16.374 [2024-07-25 07:31:48.669115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.669146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.679492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f4298
00:26:16.374 [2024-07-25 07:31:48.681092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.681123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.691383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f3e60
00:26:16.374 [2024-07-25 07:31:48.692553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.692583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.704277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e3498
00:26:16.374 [2024-07-25 07:31:48.705203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.705234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.718793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e1710
00:26:16.374 [2024-07-25 07:31:48.720732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.720763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.730723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e0a68
00:26:16.374 [2024-07-25 07:31:48.732171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.732201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.743325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fc128
00:26:16.374 [2024-07-25 07:31:48.744749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.744779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.755079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ec408
00:26:16.374 [2024-07-25 07:31:48.756503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.756530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.768366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e5ec8
00:26:16.374 [2024-07-25 07:31:48.769927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.769963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.781622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e7818
00:26:16.374 [2024-07-25 07:31:48.783435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.783461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.793555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fe2e8
00:26:16.374 [2024-07-25 07:31:48.794788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.794819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.806419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eea00
00:26:16.374 [2024-07-25 07:31:48.807599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.807631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.819366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fa3a0
00:26:16.374 [2024-07-25 07:31:48.820766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.820796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.374 [2024-07-25 07:31:48.832006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e0630
00:26:16.374 [2024-07-25 07:31:48.833535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.374 [2024-07-25 07:31:48.833562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.375 [2024-07-25 07:31:48.844754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fb480
00:26:16.375 [2024-07-25 07:31:48.846183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.375 [2024-07-25 07:31:48.846214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.375 [2024-07-25 07:31:48.857480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190dfdc0
00:26:16.375 [2024-07-25 07:31:48.858887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.375 [2024-07-25 07:31:48.858918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.375 [2024-07-25 07:31:48.869869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f5378
00:26:16.375 [2024-07-25 07:31:48.871346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.375 [2024-07-25 07:31:48.871373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:16.375 [2024-07-25 07:31:48.882633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e1f80
00:26:16.375 [2024-07-25 07:31:48.884115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.375 [2024-07-25 07:31:48.884142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:16.375 [2024-07-25 07:31:48.892524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e4de8
00:26:16.375 [2024-07-25 07:31:48.893375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.375 [2024-07-25 07:31:48.893402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.904570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fe2e8
00:26:16.633 [2024-07-25 07:31:48.905555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.905583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.916972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fe720
00:26:16.633 [2024-07-25 07:31:48.918054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.918081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.929302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e9e10
00:26:16.633 [2024-07-25 07:31:48.930600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.930627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.941127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7da8
00:26:16.633 [2024-07-25 07:31:48.942424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.942452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.952831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e99d8
00:26:16.633 [2024-07-25 07:31:48.954118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.954146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.964641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190df988
00:26:16.633 [2024-07-25 07:31:48.966022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.966050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.976419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e38d0
00:26:16.633 [2024-07-25 07:31:48.977804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.977832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.988136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ea680
00:26:16.633 [2024-07-25 07:31:48.989430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:48.989458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:48.999849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fac10
00:26:16.633 [2024-07-25 07:31:49.001137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.001164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:49.011642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190de038
00:26:16.633 [2024-07-25 07:31:49.012934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.012962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:49.023378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e7c50
00:26:16.633 [2024-07-25 07:31:49.024734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.024776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:49.035082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0788
00:26:16.633 [2024-07-25 07:31:49.036358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.036385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:49.046894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f6020
00:26:16.633 [2024-07-25 07:31:49.048252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.048279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:49.058668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f46d0
00:26:16.633 [2024-07-25 07:31:49.060037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.060064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.633 [2024-07-25 07:31:49.070562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ebb98
00:26:16.633 [2024-07-25 07:31:49.071885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.633 [2024-07-25 07:31:49.071913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.082343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fd208
00:26:16.634 [2024-07-25 07:31:49.083635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.083668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.094070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f92c0
00:26:16.634 [2024-07-25 07:31:49.095389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.095422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.105806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e1f80
00:26:16.634 [2024-07-25 07:31:49.107158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.107187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.117519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ff3c8
00:26:16.634 [2024-07-25 07:31:49.118947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.118975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.129433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e8088
00:26:16.634 [2024-07-25 07:31:49.130792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.130820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.141228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7100
00:26:16.634 [2024-07-25 07:31:49.142709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.142737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.634 [2024-07-25 07:31:49.153332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e95a0
00:26:16.634 [2024-07-25 07:31:49.154697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.634 [2024-07-25 07:31:49.154725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.892 [2024-07-25 07:31:49.165114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190df550
00:26:16.892 [2024-07-25 07:31:49.166457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.892 [2024-07-25 07:31:49.166484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.892 [2024-07-25 07:31:49.177391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f6cc8
00:26:16.892 [2024-07-25 07:31:49.178831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.892 [2024-07-25 07:31:49.178858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:16.892 [2024-07-25 07:31:49.186971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fa7d8
00:26:16.892 [2024-07-25 07:31:49.187900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.892 [2024-07-25 07:31:49.187928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:16.892 [2024-07-25 07:31:49.198716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e3d08
00:26:16.892 [2024-07-25 07:31:49.199557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.892 [2024-07-25 07:31:49.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:16.892 [2024-07-25 07:31:49.210672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fa3a0
00:26:16.892 [2024-07-25 07:31:49.211614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.892 [2024-07-25 07:31:49.211640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:16.892 [2024-07-25 07:31:49.222461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e0630
00:26:16.892 [2024-07-25 07:31:49.223420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.892 [2024-07-25 07:31:49.223447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.234268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f8a50
00:26:16.893 [2024-07-25 07:31:49.235294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.235322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.246124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190df550
00:26:16.893 [2024-07-25 07:31:49.247094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.247121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.257863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e95a0
00:26:16.893 [2024-07-25 07:31:49.258959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.258986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.269619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ed4e8
00:26:16.893 [2024-07-25 07:31:49.270591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.270619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.281288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0350
00:26:16.893 [2024-07-25 07:31:49.282253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.282281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.293070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f4298
00:26:16.893 [2024-07-25 07:31:49.294050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.294078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.304879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190dece0
00:26:16.893 [2024-07-25 07:31:49.305927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.305955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.316767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e3060
00:26:16.893 [2024-07-25 07:31:49.317768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.317795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.328542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f9b30
00:26:16.893 [2024-07-25 07:31:49.329576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.329603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.340229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fdeb0
00:26:16.893 [2024-07-25 07:31:49.341376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.341404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.352215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f2948
00:26:16.893 [2024-07-25 07:31:49.353212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.353262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.364045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ecc78
00:26:16.893 [2024-07-25 07:31:49.365100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.365128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.375921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f31b8
00:26:16.893 [2024-07-25 07:31:49.376927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.376956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.387711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eaab8
00:26:16.893 [2024-07-25 07:31:49.388731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.388767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.399521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f6cc8
00:26:16.893 [2024-07-25 07:31:49.400578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.400607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:16.893 [2024-07-25 07:31:49.411275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e0a68
00:26:16.893 [2024-07-25 07:31:49.412295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:16.893 [2024-07-25 07:31:49.412323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:17.151 [2024-07-25 07:31:49.424570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fc128
00:26:17.152 [2024-07-25 07:31:49.426200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.152 [2024-07-25 07:31:49.426251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:17.152 [2024-07-25 07:31:49.435552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e5658
00:26:17.152 [2024-07-25 07:31:49.436680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.152 [2024-07-25 07:31:49.436708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:17.152 [2024-07-25 07:31:49.447149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ddc00
00:26:17.152 [2024-07-25 07:31:49.448273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.152 [2024-07-25 07:31:49.448300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:17.152 [2024-07-25 07:31:49.459044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eaef0
00:26:17.152 [2024-07-25 07:31:49.460198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.152 [2024-07-25 07:31:49.460226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:17.152 [2024-07-25 07:31:49.471021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eff18
00:26:17.152
[2024-07-25 07:31:49.472185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.472214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.482917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ed920 00:26:17.152 [2024-07-25 07:31:49.484104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.484132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.494680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e01f8 00:26:17.152 [2024-07-25 07:31:49.495860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.495888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.506457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190df118 00:26:17.152 [2024-07-25 07:31:49.507616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.507643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.518145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x134d020) with pdu=0x2000190fda78 00:26:17.152 [2024-07-25 07:31:49.519320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.519348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.529937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0ff8 00:26:17.152 [2024-07-25 07:31:49.531138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.531166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.541868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e1710 00:26:17.152 [2024-07-25 07:31:49.542974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.543001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.553620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e6b70 00:26:17.152 [2024-07-25 07:31:49.554754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.554782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.565407] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f3a28 00:26:17.152 [2024-07-25 07:31:49.566589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.566632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.577213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e4140 00:26:17.152 [2024-07-25 07:31:49.578363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.578391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.588940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fb048 00:26:17.152 [2024-07-25 07:31:49.590124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.590150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.600774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eea00 00:26:17.152 [2024-07-25 07:31:49.601973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.602001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:17.152 [2024-07-25 07:31:49.612616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e8d30 00:26:17.152 [2024-07-25 07:31:49.613749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.613776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.624366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ea680 00:26:17.152 [2024-07-25 07:31:49.625529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.625557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.636071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fac10 00:26:17.152 [2024-07-25 07:31:49.637178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.637206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.647887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190de038 00:26:17.152 [2024-07-25 07:31:49.649031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.649059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.659678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e7c50 00:26:17.152 [2024-07-25 07:31:49.660854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.660881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.152 [2024-07-25 07:31:49.671461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0bc0 00:26:17.152 [2024-07-25 07:31:49.672597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.152 [2024-07-25 07:31:49.672639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.684808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f5378 00:26:17.411 [2024-07-25 07:31:49.686482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.686509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.695661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ee190 00:26:17.411 [2024-07-25 07:31:49.696964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.696998] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.707325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f8618 00:26:17.411 [2024-07-25 07:31:49.708573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.708616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.719111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eee38 00:26:17.411 [2024-07-25 07:31:49.720441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.730972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e5220 00:26:17.411 [2024-07-25 07:31:49.732292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.732319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.742840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f35f0 00:26:17.411 [2024-07-25 07:31:49.744137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.744165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.754717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ed0b0 00:26:17.411 [2024-07-25 07:31:49.756036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.756063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.766440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f2d80 00:26:17.411 [2024-07-25 07:31:49.767767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.767794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.778256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f8e88 00:26:17.411 [2024-07-25 07:31:49.779505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.779532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.790014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f4298 00:26:17.411 [2024-07-25 07:31:49.791259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:17.411 [2024-07-25 07:31:49.791287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.801870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190dece0 00:26:17.411 [2024-07-25 07:31:49.803142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.803170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.813763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e3060 00:26:17.411 [2024-07-25 07:31:49.815021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.815048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.825794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f9b30 00:26:17.411 [2024-07-25 07:31:49.827192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.827222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.840238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7100 00:26:17.411 [2024-07-25 07:31:49.842348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.842374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.849420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f35f0 00:26:17.411 [2024-07-25 07:31:49.850303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.850329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.863079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f6458 00:26:17.411 [2024-07-25 07:31:49.864165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.864196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.876215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e2c28 00:26:17.411 [2024-07-25 07:31:49.877434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.877461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.889535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f7100 00:26:17.411 [2024-07-25 07:31:49.890922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.890953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.902786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e3d08 00:26:17.411 [2024-07-25 07:31:49.904339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.904366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.916106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f3e60 00:26:17.411 [2024-07-25 07:31:49.917893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.917924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.411 [2024-07-25 07:31:49.926955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ed0b0 00:26:17.411 [2024-07-25 07:31:49.927841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.411 [2024-07-25 07:31:49.927871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:49.940419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190ddc00 00:26:17.670 
[2024-07-25 07:31:49.941466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:49.941494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:49.953393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190feb58 00:26:17.670 [2024-07-25 07:31:49.954765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:49.954795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:49.965971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f5be8 00:26:17.670 [2024-07-25 07:31:49.967436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:49.967462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:49.977811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f3a28 00:26:17.670 [2024-07-25 07:31:49.979160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:49.979190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:49.991141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x134d020) with pdu=0x2000190e9e10 00:26:17.670 [2024-07-25 07:31:49.992651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:49.992681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.002862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e4140 00:26:17.670 [2024-07-25 07:31:50.003811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.003842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.014366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190fbcf0 00:26:17.670 [2024-07-25 07:31:50.015284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.015314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.029359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e01f8 00:26:17.670 [2024-07-25 07:31:50.031092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.031131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.042834] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f8e88 00:26:17.670 [2024-07-25 07:31:50.044709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.044742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.056180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190eaab8 00:26:17.670 [2024-07-25 07:31:50.058239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.058292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.069480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f0bc0 00:26:17.670 [2024-07-25 07:31:50.071729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.071761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.670 [2024-07-25 07:31:50.078550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190f2d80 00:26:17.670 [2024-07-25 07:31:50.079660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.670 [2024-07-25 07:31:50.079691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:26:17.670 [2024-07-25 07:31:50.090642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e9168
00:26:17.670 [2024-07-25 07:31:50.091655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.670 [2024-07-25 07:31:50.091687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:17.670 [2024-07-25 07:31:50.104794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e5658
00:26:17.670 [2024-07-25 07:31:50.106041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.670 [2024-07-25 07:31:50.106072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:17.670 [2024-07-25 07:31:50.117892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d020) with pdu=0x2000190e12d8
00:26:17.670 [2024-07-25 07:31:50.119251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.670 [2024-07-25 07:31:50.119296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:17.670
00:26:17.670 Latency(us)
00:26:17.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:17.670 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:17.670 nvme0n1 : 2.01 20809.34 81.29 0.00 0.00 6144.42 2463.67 15437.37
00:26:17.670 ===================================================================================================================
00:26:17.670 Total : 20809.34 81.29 0.00 0.00 6144.42 2463.67 15437.37
00:26:17.670 0
00:26:17.670 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:17.670 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:17.670 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:17.670 | .driver_specific
00:26:17.670 | .nvme_error
00:26:17.670 | .status_code
00:26:17.670 | .command_transient_transport_error'
00:26:17.670 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2569943
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2569943 ']'
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2569943
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2569943
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@968 -- # echo 'killing process with pid 2569943'
00:26:17.928 killing process with pid 2569943 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2569943
00:26:17.928 Received shutdown signal, test time was about 2.000000 seconds
00:26:17.928
00:26:17.928 Latency(us)
00:26:17.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:17.928 ===================================================================================================================
00:26:17.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:17.928 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2569943
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2570463
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2570463 /var/tmp/bperf.sock
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2570463 ']'
00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.186 07:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:18.444 [2024-07-25 07:31:50.746378] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:18.444 [2024-07-25 07:31:50.746455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570463 ] 00:26:18.444 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.444 Zero copy mechanism will not be used. 
00:26:18.444 EAL: No free 2048 kB hugepages reported on node 1
00:26:18.444 [2024-07-25 07:31:50.807696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:18.444 [2024-07-25 07:31:50.924915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:26:18.703 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:18.703 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:18.703 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:18.703 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:18.961 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:18.961 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:18.961 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:18.961 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:18.961 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:18.961 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:19.219 nvme0n1
00:26:19.219 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:19.219 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:19.219 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:19.219 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:19.219 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:19.219 07:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:19.219 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:19.219 Zero copy mechanism will not be used.
00:26:19.219 Running I/O for 2 seconds...
00:26:19.477 [2024-07-25 07:31:51.765059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.765489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.765543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.777656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.778045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.791311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.791604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.791638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.805205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.805612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.805647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.818968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.819384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.819413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.832942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.833350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.833378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.845819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.846190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.846238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.858784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.859186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.859219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.871768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.872052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.872083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.884591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.884944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.477 [2024-07-25 07:31:51.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.477 [2024-07-25 07:31:51.896990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.477 [2024-07-25 07:31:51.897378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.897420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.909393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.909659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.478 [2024-07-25 07:31:51.909691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.922319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.922668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.922712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.935797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.935993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.936023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.948995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.949352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.949396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.962011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.962411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.962439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.974914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.975282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.975325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:51.987094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:51.987483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:51.987517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.478 [2024-07-25 07:31:52.000032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.478 [2024-07-25 07:31:52.000403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.478 [2024-07-25 07:31:52.000447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.011901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.012261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.012289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.024711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.025102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.025150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.037847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.038216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.038268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.050003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.050371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.050414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.062577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 
00:26:19.736 [2024-07-25 07:31:52.062965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.062993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.075003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.075369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.075412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.087731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.088100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.088146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.100200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.100601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.100648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.111999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.112372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.112416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.124198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.124536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.124580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.136510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.136887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.136914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.149117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.149500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.149543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 
07:31:52.161914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.162140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.162182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.173981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.174339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.174368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.187563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.736 [2024-07-25 07:31:52.187910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.736 [2024-07-25 07:31:52.187938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.736 [2024-07-25 07:31:52.199448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.737 [2024-07-25 07:31:52.199794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.737 [2024-07-25 07:31:52.199822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.737 [2024-07-25 07:31:52.212191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.737 [2024-07-25 07:31:52.212575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.737 [2024-07-25 07:31:52.212617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.737 [2024-07-25 07:31:52.224855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.737 [2024-07-25 07:31:52.225206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.737 [2024-07-25 07:31:52.225255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.737 [2024-07-25 07:31:52.236553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.737 [2024-07-25 07:31:52.236944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.737 [2024-07-25 07:31:52.236971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.737 [2024-07-25 07:31:52.248573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.737 [2024-07-25 07:31:52.248913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.737 [2024-07-25 07:31:52.248941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.737 [2024-07-25 07:31:52.260119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.737 [2024-07-25 07:31:52.260390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.737 [2024-07-25 07:31:52.260419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.272561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.272953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.272981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.285404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.285793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.285823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.298047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.298430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.298458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.310752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.311099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.311131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.322658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.323031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.323059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.335170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.335570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.335598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.348146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.348508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.348538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.360965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.361337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.361381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.373986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.374378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.374406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.386283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.386683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.386716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.399580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.399927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.399972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.412644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.412969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.412996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.423973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.424345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.424374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.436126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.436497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.436542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.448650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.448990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.449018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.461357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.461568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.461597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.473483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.473682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.473710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.485975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.486358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.486389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.498345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 
00:26:19.995 [2024-07-25 07:31:52.498698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.498741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.510593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:19.995 [2024-07-25 07:31:52.510966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.995 [2024-07-25 07:31:52.511008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.995 [2024-07-25 07:31:52.523313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.523683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.523717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.535439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.535785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.535812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.547424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.547786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.547832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.559697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.560058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.560101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.572032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.572436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.572464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.585161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.585551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.585598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 
07:31:52.597547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.597953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.610800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.611174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.611202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.623623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.623965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.623992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.635898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.636281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.636327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.648579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.648942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.648990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.660885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.661259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.661294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.672949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.673311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.673341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.685104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.685484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.685513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.696183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.254 [2024-07-25 07:31:52.696565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.254 [2024-07-25 07:31:52.696610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.254 [2024-07-25 07:31:52.708810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.255 [2024-07-25 07:31:52.709233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.255 [2024-07-25 07:31:52.709272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.255 [2024-07-25 07:31:52.721197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.255 [2024-07-25 07:31:52.721568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.255 [2024-07-25 07:31:52.721598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.255 [2024-07-25 07:31:52.733420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.255 [2024-07-25 07:31:52.733796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.255 [2024-07-25 07:31:52.733827] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.255 [2024-07-25 07:31:52.745611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.255 [2024-07-25 07:31:52.745962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.255 [2024-07-25 07:31:52.745992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.255 [2024-07-25 07:31:52.758149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.255 [2024-07-25 07:31:52.758496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.255 [2024-07-25 07:31:52.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.255 [2024-07-25 07:31:52.770152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.255 [2024-07-25 07:31:52.770506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.255 [2024-07-25 07:31:52.770550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.255 [2024-07-25 07:31:52.782773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.783122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.783152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.794817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.795162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.795191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.808634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.808989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.809033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.821583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.821925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.821955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.834097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.834468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.834498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.845475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.845853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.845888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.857655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.858008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.858037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.869674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.870031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.870073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.882344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.882712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.882755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.894020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.894212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.894248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.906527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.906894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.906923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.918445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.918801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.918830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.930364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 
00:26:20.513 [2024-07-25 07:31:52.930725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.930771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.942397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.942755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.942784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.954272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.954641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.954685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.965783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.966018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.966047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.978343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.513 [2024-07-25 07:31:52.978675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.513 [2024-07-25 07:31:52.978705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.513 [2024-07-25 07:31:52.990152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.514 [2024-07-25 07:31:52.990500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.514 [2024-07-25 07:31:52.990530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.514 [2024-07-25 07:31:53.001443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.514 [2024-07-25 07:31:53.001810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.514 [2024-07-25 07:31:53.001853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.514 [2024-07-25 07:31:53.014210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.514 [2024-07-25 07:31:53.014580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.514 [2024-07-25 07:31:53.014625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.514 [2024-07-25 
07:31:53.026333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.514 [2024-07-25 07:31:53.026692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.514 [2024-07-25 07:31:53.026735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.514 [2024-07-25 07:31:53.038682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.514 [2024-07-25 07:31:53.039031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.514 [2024-07-25 07:31:53.039061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.051144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.772 [2024-07-25 07:31:53.051519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.772 [2024-07-25 07:31:53.051549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.062633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.772 [2024-07-25 07:31:53.062993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.772 [2024-07-25 07:31:53.063036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.074900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.772 [2024-07-25 07:31:53.075301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.772 [2024-07-25 07:31:53.075331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.087413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.772 [2024-07-25 07:31:53.087770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.772 [2024-07-25 07:31:53.087815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.100291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.772 [2024-07-25 07:31:53.100651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.772 [2024-07-25 07:31:53.100695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.112426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.772 [2024-07-25 07:31:53.112768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.772 [2024-07-25 07:31:53.112797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.772 [2024-07-25 07:31:53.124894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.773 [2024-07-25 07:31:53.125320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.773 [2024-07-25 07:31:53.125348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.773 [2024-07-25 07:31:53.136132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.773 [2024-07-25 07:31:53.136483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.773 [2024-07-25 07:31:53.136515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.773 [2024-07-25 07:31:53.148278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.773 [2024-07-25 07:31:53.148641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.773 [2024-07-25 07:31:53.148684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.773 [2024-07-25 07:31:53.159822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90 00:26:20.773 [2024-07-25 07:31:53.160179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.773 [2024-07-25 
07:31:53.160216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.773 [2024-07-25 07:31:53.171944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90
00:26:20.773 [2024-07-25 07:31:53.172188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.773 [2024-07-25 07:31:53.172217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-entry pattern (data_crc32_calc_done data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR completion) repeats roughly every 11 ms from 07:31:53.183590 through 07:31:53.747152, with only the timestamp, lba, and sqhd values varying; repeated entries elided ...]
00:26:21.289 [2024-07-25 07:31:53.747152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134d360) with pdu=0x2000190fef90
00:26:21.289 [2024-07-25 07:31:53.747553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.289 [2024-07-25 07:31:53.747582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.289
00:26:21.289 Latency(us)
00:26:21.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:21.289 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:21.289 nvme0n1 : 2.01 2549.12 318.64 0.00 0.00 6262.88 4466.16 13883.92
00:26:21.289 ===================================================================================================================
00:26:21.289 Total : 2549.12 318.64 0.00 0.00 6262.88 4466.16 13883.92
00:26:21.289 0
00:26:21.289 07:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:21.289 07:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:21.289 07:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:21.289 07:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:21.289 | .driver_specific
00:26:21.289 | .nvme_error
00:26:21.289 | .status_code
00:26:21.289 | .command_transient_transport_error'
00:26:21.546 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:26:21.546 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2570463
00:26:21.546 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '['
-z 2570463 ']' 00:26:21.546 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2570463 00:26:21.546 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2570463 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2570463' 00:26:21.804 killing process with pid 2570463 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2570463 00:26:21.804 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.804 00:26:21.804 Latency(us) 00:26:21.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.804 =================================================================================================================== 00:26:21.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.804 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2570463 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2569041 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2569041 ']' 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2569041 00:26:22.062 
07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2569041 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2569041' 00:26:22.062 killing process with pid 2569041 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2569041 00:26:22.062 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2569041 00:26:22.320 00:26:22.320 real 0m15.555s 00:26:22.320 user 0m31.238s 00:26:22.320 sys 0m3.913s 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.320 ************************************ 00:26:22.320 END TEST nvmf_digest_error 00:26:22.320 ************************************ 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:22.320 rmmod nvme_tcp 00:26:22.320 rmmod nvme_fabrics 00:26:22.320 rmmod nvme_keyring 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2569041 ']' 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2569041 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2569041 ']' 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2569041 00:26:22.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2569041) - No such process 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2569041 is not found' 00:26:22.320 Process with pid 2569041 is not found 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.320 07:31:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.320 07:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.846 00:26:24.846 real 0m35.600s 00:26:24.846 user 1m2.612s 00:26:24.846 sys 0m9.769s 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.846 ************************************ 00:26:24.846 END TEST nvmf_digest 00:26:24.846 ************************************ 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.846 ************************************ 00:26:24.846 START TEST nvmf_bdevperf 00:26:24.846 ************************************ 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:24.846 * Looking for test storage... 
00:26:24.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:24.846 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.847 07:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.745 07:31:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.745 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.746 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.746 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:26.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:26:26.746 00:26:26.746 --- 10.0.0.2 ping statistics --- 00:26:26.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.746 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:26.746 07:31:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:26:26.746 00:26:26.746 --- 10.0.0.1 ping statistics --- 00:26:26.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.746 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.746 
07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2572818 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2572818 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2572818 ']' 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.746 07:31:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.746 [2024-07-25 07:31:59.075948] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:26:26.746 [2024-07-25 07:31:59.076034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.746 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.746 [2024-07-25 07:31:59.138988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:26.746 [2024-07-25 07:31:59.246419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.746 [2024-07-25 07:31:59.246473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.746 [2024-07-25 07:31:59.246502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.746 [2024-07-25 07:31:59.246513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.746 [2024-07-25 07:31:59.246523] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.746 [2024-07-25 07:31:59.246640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.746 [2024-07-25 07:31:59.246670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.746 [2024-07-25 07:31:59.246673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.678 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:27.678 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:27.678 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:27.678 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.678 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 [2024-07-25 07:32:00.085682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 Malloc0 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 [2024-07-25 07:32:00.154391] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:27.679 
07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.679 { 00:26:27.679 "params": { 00:26:27.679 "name": "Nvme$subsystem", 00:26:27.679 "trtype": "$TEST_TRANSPORT", 00:26:27.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.679 "adrfam": "ipv4", 00:26:27.679 "trsvcid": "$NVMF_PORT", 00:26:27.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.679 "hdgst": ${hdgst:-false}, 00:26:27.679 "ddgst": ${ddgst:-false} 00:26:27.679 }, 00:26:27.679 "method": "bdev_nvme_attach_controller" 00:26:27.679 } 00:26:27.679 EOF 00:26:27.679 )") 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:27.679 07:32:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.679 "params": { 00:26:27.679 "name": "Nvme1", 00:26:27.679 "trtype": "tcp", 00:26:27.679 "traddr": "10.0.0.2", 00:26:27.679 "adrfam": "ipv4", 00:26:27.679 "trsvcid": "4420", 00:26:27.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.679 "hdgst": false, 00:26:27.679 "ddgst": false 00:26:27.679 }, 00:26:27.679 "method": "bdev_nvme_attach_controller" 00:26:27.679 }' 00:26:27.679 [2024-07-25 07:32:00.204523] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:26:27.679 [2024-07-25 07:32:00.204605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572970 ] 00:26:27.936 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.936 [2024-07-25 07:32:00.264082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.936 [2024-07-25 07:32:00.377337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.193 Running I/O for 1 seconds... 00:26:29.569 00:26:29.569 Latency(us) 00:26:29.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.569 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.569 Verification LBA range: start 0x0 length 0x4000 00:26:29.569 Nvme1n1 : 1.01 8745.23 34.16 0.00 0.00 14573.43 2123.85 13689.74 00:26:29.569 =================================================================================================================== 00:26:29.569 Total : 8745.23 34.16 0.00 0.00 14573.43 2123.85 13689.74 00:26:29.569 07:32:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2573235 00:26:29.569 07:32:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:29.569 07:32:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:29.569 07:32:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.569 07:32:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.569 { 00:26:29.569 "params": { 00:26:29.569 "name": "Nvme$subsystem", 00:26:29.569 "trtype": "$TEST_TRANSPORT", 00:26:29.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.569 "adrfam": "ipv4", 00:26:29.569 "trsvcid": "$NVMF_PORT", 00:26:29.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.569 "hdgst": ${hdgst:-false}, 00:26:29.569 "ddgst": ${ddgst:-false} 00:26:29.569 }, 00:26:29.569 "method": "bdev_nvme_attach_controller" 00:26:29.569 } 00:26:29.569 EOF 00:26:29.569 )") 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:29.569 07:32:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:29.569 "params": { 00:26:29.569 "name": "Nvme1", 00:26:29.569 "trtype": "tcp", 00:26:29.569 "traddr": "10.0.0.2", 00:26:29.569 "adrfam": "ipv4", 00:26:29.569 "trsvcid": "4420", 00:26:29.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.569 "hdgst": false, 00:26:29.569 "ddgst": false 00:26:29.569 }, 00:26:29.569 "method": "bdev_nvme_attach_controller" 00:26:29.569 }' 00:26:29.569 [2024-07-25 07:32:02.045605] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:26:29.569 [2024-07-25 07:32:02.045679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573235 ] 00:26:29.569 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.827 [2024-07-25 07:32:02.104955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.827 [2024-07-25 07:32:02.215702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.085 Running I/O for 15 seconds... 00:26:32.668 07:32:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2572818 00:26:32.668 07:32:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:32.668 [2024-07-25 07:32:05.012582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50096 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.668 [2024-07-25 07:32:05.012899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.668 [2024-07-25 07:32:05.012918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.012943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 
07:32:05.012962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.012978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.012998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013146] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-25 07:32:05.013697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.669 [2024-07-25 07:32:05.013932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.013983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.013998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.669 [2024-07-25 07:32:05.014228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.669 [2024-07-25 07:32:05.014252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.670 [2024-07-25 07:32:05.014492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.014982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.014997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 
[2024-07-25 07:32:05.015069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.670 [2024-07-25 07:32:05.015544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.670 [2024-07-25 07:32:05.015559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 
[2024-07-25 07:32:05.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.015973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.015991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.671 [2024-07-25 07:32:05.016133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 
[2024-07-25 07:32:05.016203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.671 [2024-07-25 07:32:05.016791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.671 [2024-07-25 07:32:05.016874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.671 [2024-07-25 07:32:05.016889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.016907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.672 [2024-07-25 07:32:05.016923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.016939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16627f0 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.016960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:32.672 [2024-07-25 07:32:05.016974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:26:32.672 [2024-07-25 07:32:05.016986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50064 len:8 PRP1 0x0 PRP2 0x0 00:26:32.672 [2024-07-25 07:32:05.017000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.017072] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16627f0 was disconnected and freed. reset controller. 00:26:32.672 [2024-07-25 07:32:05.017149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.672 [2024-07-25 07:32:05.017173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.017190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.672 [2024-07-25 07:32:05.017205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.017220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.672 [2024-07-25 07:32:05.017235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.017258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.672 [2024-07-25 07:32:05.017289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.672 [2024-07-25 07:32:05.017303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.021077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.021124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.021835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.021869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.021888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.022131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.022393] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.022417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.022435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.026051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.035391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.035865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.035897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.035916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.036156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.036471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.036497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.036512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.040101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.049311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.049781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.049824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.049842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.050101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.050366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.050389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.050403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.054013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.063375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.063817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.063849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.063867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.064113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.064370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.064396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.064411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.067997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.077337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.077774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.077805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.077823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.078063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.078319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.078343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.078359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.081945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.091281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.091704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.091735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.091753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.091993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.092238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.092272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.092287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.095878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.105217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.105673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.105704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.105722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.105963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.106207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.106230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.106262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.109852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.672 [2024-07-25 07:32:05.119204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.672 [2024-07-25 07:32:05.119654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.672 [2024-07-25 07:32:05.119686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.672 [2024-07-25 07:32:05.119704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.672 [2024-07-25 07:32:05.119943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.672 [2024-07-25 07:32:05.120189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.672 [2024-07-25 07:32:05.120213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.672 [2024-07-25 07:32:05.120228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.672 [2024-07-25 07:32:05.123827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.673 [2024-07-25 07:32:05.133167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.673 [2024-07-25 07:32:05.133653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.673 [2024-07-25 07:32:05.133696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.673 [2024-07-25 07:32:05.133712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.673 [2024-07-25 07:32:05.133970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.673 [2024-07-25 07:32:05.134215] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.673 [2024-07-25 07:32:05.134239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.673 [2024-07-25 07:32:05.134266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.673 [2024-07-25 07:32:05.137859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.673 [2024-07-25 07:32:05.147196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.673 [2024-07-25 07:32:05.147637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.673 [2024-07-25 07:32:05.147668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.673 [2024-07-25 07:32:05.147686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.673 [2024-07-25 07:32:05.147926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.673 [2024-07-25 07:32:05.148170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.673 [2024-07-25 07:32:05.148194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.673 [2024-07-25 07:32:05.148210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.673 [2024-07-25 07:32:05.151832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.673 [2024-07-25 07:32:05.161185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.673 [2024-07-25 07:32:05.161613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.673 [2024-07-25 07:32:05.161646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.673 [2024-07-25 07:32:05.161664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.673 [2024-07-25 07:32:05.161904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.673 [2024-07-25 07:32:05.162149] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.673 [2024-07-25 07:32:05.162173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.673 [2024-07-25 07:32:05.162188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.673 [2024-07-25 07:32:05.165790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.673 [2024-07-25 07:32:05.175124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.673 [2024-07-25 07:32:05.175545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.673 [2024-07-25 07:32:05.175577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.673 [2024-07-25 07:32:05.175595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.673 [2024-07-25 07:32:05.175836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.673 [2024-07-25 07:32:05.176081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.673 [2024-07-25 07:32:05.176105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.673 [2024-07-25 07:32:05.176120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.673 [2024-07-25 07:32:05.179721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.673 [2024-07-25 07:32:05.189072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.673 [2024-07-25 07:32:05.189508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.673 [2024-07-25 07:32:05.189539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:32.673 [2024-07-25 07:32:05.189557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:32.673 [2024-07-25 07:32:05.189797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:32.673 [2024-07-25 07:32:05.190041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.673 [2024-07-25 07:32:05.190065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.673 [2024-07-25 07:32:05.190080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.673 [2024-07-25 07:32:05.193679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.932 [2024-07-25 07:32:05.202935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.932 [2024-07-25 07:32:05.203372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.932 [2024-07-25 07:32:05.203401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.932 [2024-07-25 07:32:05.203418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.932 [2024-07-25 07:32:05.203672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.932 [2024-07-25 07:32:05.203919] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.932 [2024-07-25 07:32:05.203943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.932 [2024-07-25 07:32:05.203958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.932 [2024-07-25 07:32:05.207567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.932 [2024-07-25 07:32:05.217019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.932 [2024-07-25 07:32:05.217482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.932 [2024-07-25 07:32:05.217511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.932 [2024-07-25 07:32:05.217546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.932 [2024-07-25 07:32:05.217787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.932 [2024-07-25 07:32:05.218032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.932 [2024-07-25 07:32:05.218056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.932 [2024-07-25 07:32:05.218071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.932 [2024-07-25 07:32:05.221598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.932 [2024-07-25 07:32:05.230487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.932 [2024-07-25 07:32:05.230901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.932 [2024-07-25 07:32:05.230928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.932 [2024-07-25 07:32:05.230960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.932 [2024-07-25 07:32:05.231186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.932 [2024-07-25 07:32:05.231408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.932 [2024-07-25 07:32:05.231429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.932 [2024-07-25 07:32:05.231443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.932 [2024-07-25 07:32:05.234517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.932 [2024-07-25 07:32:05.244455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.932 [2024-07-25 07:32:05.244976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.932 [2024-07-25 07:32:05.245020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.932 [2024-07-25 07:32:05.245038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.932 [2024-07-25 07:32:05.245316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.932 [2024-07-25 07:32:05.245542] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.932 [2024-07-25 07:32:05.245567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.932 [2024-07-25 07:32:05.245588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.932 [2024-07-25 07:32:05.249143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.932 [2024-07-25 07:32:05.258388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.932 [2024-07-25 07:32:05.258814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.932 [2024-07-25 07:32:05.258845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.932 [2024-07-25 07:32:05.258863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.932 [2024-07-25 07:32:05.259102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.259358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.259383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.259399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.263018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.272413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.272805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.272835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.272852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.273084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.273345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.273368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.273382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.276875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.286342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.286874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.286926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.286945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.287184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.287439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.287462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.287476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.291104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.300374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.300860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.300917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.300935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.301176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.301433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.301455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.301470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.305105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.314305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.314718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.314750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.314768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.315009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.315265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.315291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.315306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.318905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.328302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.328782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.328813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.328831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.329070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.329328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.329353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.329368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.332963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.342456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.342949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.342980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.342998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.343238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.343504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.343528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.343543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.347139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.356506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.356974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.357023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.357041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.357292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.357537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.357560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.357575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.361166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.370530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.371050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.371081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.371098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.371349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.371602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.371626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.371641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.375233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.384600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.385103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.385134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.385152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.385404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.933 [2024-07-25 07:32:05.385649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.933 [2024-07-25 07:32:05.385673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.933 [2024-07-25 07:32:05.385689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.933 [2024-07-25 07:32:05.389299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.933 [2024-07-25 07:32:05.398651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.933 [2024-07-25 07:32:05.399102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.933 [2024-07-25 07:32:05.399134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.933 [2024-07-25 07:32:05.399152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.933 [2024-07-25 07:32:05.399404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.934 [2024-07-25 07:32:05.399649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.934 [2024-07-25 07:32:05.399673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.934 [2024-07-25 07:32:05.399688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.934 [2024-07-25 07:32:05.403288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.934 [2024-07-25 07:32:05.412635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.934 [2024-07-25 07:32:05.413047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.934 [2024-07-25 07:32:05.413077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.934 [2024-07-25 07:32:05.413095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.934 [2024-07-25 07:32:05.413359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.934 [2024-07-25 07:32:05.413605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.934 [2024-07-25 07:32:05.413629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.934 [2024-07-25 07:32:05.413644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.934 [2024-07-25 07:32:05.417231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.934 [2024-07-25 07:32:05.426581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.934 [2024-07-25 07:32:05.427006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.934 [2024-07-25 07:32:05.427037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.934 [2024-07-25 07:32:05.427054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.934 [2024-07-25 07:32:05.427305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.934 [2024-07-25 07:32:05.427551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.934 [2024-07-25 07:32:05.427575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.934 [2024-07-25 07:32:05.427590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.934 [2024-07-25 07:32:05.431182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.934 [2024-07-25 07:32:05.440533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.934 [2024-07-25 07:32:05.440968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.934 [2024-07-25 07:32:05.440999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.934 [2024-07-25 07:32:05.441022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.934 [2024-07-25 07:32:05.441274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.934 [2024-07-25 07:32:05.441520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.934 [2024-07-25 07:32:05.441544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.934 [2024-07-25 07:32:05.441559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.934 [2024-07-25 07:32:05.445150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:32.934 [2024-07-25 07:32:05.454518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:32.934 [2024-07-25 07:32:05.454953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.934 [2024-07-25 07:32:05.454983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:32.934 [2024-07-25 07:32:05.455001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:32.934 [2024-07-25 07:32:05.455252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:32.934 [2024-07-25 07:32:05.455498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:32.934 [2024-07-25 07:32:05.455522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:32.934 [2024-07-25 07:32:05.455538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:32.934 [2024-07-25 07:32:05.459125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-25 07:32:05.468482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-25 07:32:05.468900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-25 07:32:05.468931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-25 07:32:05.468949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.192 [2024-07-25 07:32:05.469188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.469443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.469467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.469482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.473071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.482428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.482872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.482903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.482921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.483160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.483416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.483445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.483461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.487051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.496410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.496848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.496880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.496897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.497137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.497392] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.497417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.497432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.501022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.510374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.510813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.510844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.510861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.511101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.511356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.511380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.511396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.515005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.524369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.524804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.524835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.524853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.525093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.525349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.525373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.525389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.528981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.538355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.538791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.538821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.538840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.539079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.539335] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.539360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.539375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.542960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.552316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.552718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.552750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.552768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.553008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.553269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.553293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.553309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.556899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.566248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.566658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.566689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.566707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.566947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.567191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.567214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.567230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.570833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.580173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.580604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.580631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.580663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.580921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.581166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.581190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-25 07:32:05.581205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-25 07:32:05.584807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-25 07:32:05.594152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-25 07:32:05.594569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-25 07:32:05.594600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-25 07:32:05.594618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.193 [2024-07-25 07:32:05.594858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.193 [2024-07-25 07:32:05.595103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-25 07:32:05.595127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.595142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.598770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.608117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.608533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.608565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.608583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.608823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.609068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.609092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.609107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.612707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.622061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.622459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.622491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.622508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.622748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.622993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.623017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.623038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.626640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.635990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.636406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.636436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.636454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.636694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.636939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.636962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.636978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.640578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.649926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.650337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.650369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.650387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.650626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.650870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.650894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.650909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.654518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.663920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.664357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.664390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.664407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.664647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.664891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.664915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.664930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.668533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.677880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.678334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.678362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.678379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.678623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.678868] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.678892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.678907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.682509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.691859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.692282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.692314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.692332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.692572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.692816] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.692840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.692855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.696458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.705805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.706252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.706284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.706302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.706542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.706786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.706810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.706825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.194 [2024-07-25 07:32:05.710425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.194 [2024-07-25 07:32:05.719781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.194 [2024-07-25 07:32:05.720210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.194 [2024-07-25 07:32:05.720267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.194 [2024-07-25 07:32:05.720286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.194 [2024-07-25 07:32:05.720527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.194 [2024-07-25 07:32:05.720777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.194 [2024-07-25 07:32:05.720801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.194 [2024-07-25 07:32:05.720817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.453 [2024-07-25 07:32:05.724451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.453 [2024-07-25 07:32:05.733812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.453 [2024-07-25 07:32:05.734254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.734286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.734304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.734543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.734788] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.734812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.734828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.738429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.747776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.748205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.748236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.748264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.748506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.748751] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.748775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.748790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.752388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.761734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.762290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.762322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.762340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.762580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.762824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.762848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.762864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.766471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.775612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.776001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.776032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.776050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.776301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.776546] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.776570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.776585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.780173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.789530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.789963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.789994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.790012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.790263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.790509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.790532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.790548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.794138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.803499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.803919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.803961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.803976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.804239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.804495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.804520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.804535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.808122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.817489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.817924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.817960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.817979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.818219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.818473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.818498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.818514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.822100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.831450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.831835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.831867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.831885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.832125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.832381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.832406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.832421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.836012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.845369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.845812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.845854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.845871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.846129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.846386] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.846410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.846426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.850013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.859387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.859827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.454 [2024-07-25 07:32:05.859869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.454 [2024-07-25 07:32:05.859885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.454 [2024-07-25 07:32:05.860139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.454 [2024-07-25 07:32:05.860402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.454 [2024-07-25 07:32:05.860435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.454 [2024-07-25 07:32:05.860451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.454 [2024-07-25 07:32:05.864043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.454 [2024-07-25 07:32:05.873396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.454 [2024-07-25 07:32:05.873815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.455 [2024-07-25 07:32:05.873856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.455 [2024-07-25 07:32:05.873871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.455 [2024-07-25 07:32:05.874104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.455 [2024-07-25 07:32:05.874361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.455 [2024-07-25 07:32:05.874386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.455 [2024-07-25 07:32:05.874401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.455 [2024-07-25 07:32:05.877989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.455 [2024-07-25 07:32:05.887334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.455 [2024-07-25 07:32:05.887743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.455 [2024-07-25 07:32:05.887774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.455 [2024-07-25 07:32:05.887791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.455 [2024-07-25 07:32:05.888031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.455 [2024-07-25 07:32:05.888288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.455 [2024-07-25 07:32:05.888313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.455 [2024-07-25 07:32:05.888328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.455 [2024-07-25 07:32:05.891916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.455 [2024-07-25 07:32:05.901271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.455 [2024-07-25 07:32:05.901676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.455 [2024-07-25 07:32:05.901707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.455 [2024-07-25 07:32:05.901724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.455 [2024-07-25 07:32:05.901964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.455 [2024-07-25 07:32:05.902209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.455 [2024-07-25 07:32:05.902232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.455 [2024-07-25 07:32:05.902258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.455 [2024-07-25 07:32:05.905852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.455 [2024-07-25 07:32:05.915218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.455 [2024-07-25 07:32:05.915635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.455 [2024-07-25 07:32:05.915667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.455 [2024-07-25 07:32:05.915685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.455 [2024-07-25 07:32:05.915924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.455 [2024-07-25 07:32:05.916169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.455 [2024-07-25 07:32:05.916193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.455 [2024-07-25 07:32:05.916208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.455 [2024-07-25 07:32:05.919810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.455 [2024-07-25 07:32:05.929150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.455 [2024-07-25 07:32:05.929596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.455 [2024-07-25 07:32:05.929627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.455 [2024-07-25 07:32:05.929644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.455 [2024-07-25 07:32:05.929884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.455 [2024-07-25 07:32:05.930128] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.455 [2024-07-25 07:32:05.930153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.455 [2024-07-25 07:32:05.930168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.455 [2024-07-25 07:32:05.933768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.455 [2024-07-25 07:32:05.943115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.455 [2024-07-25 07:32:05.943537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.455 [2024-07-25 07:32:05.943568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.455 [2024-07-25 07:32:05.943586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.455 [2024-07-25 07:32:05.943825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.455 [2024-07-25 07:32:05.944069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.455 [2024-07-25 07:32:05.944093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.455 [2024-07-25 07:32:05.944108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.455 [2024-07-25 07:32:05.947707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.455 [2024-07-25 07:32:05.957064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.455 [2024-07-25 07:32:05.957529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.455 [2024-07-25 07:32:05.957571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.455 [2024-07-25 07:32:05.957592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.455 [2024-07-25 07:32:05.957852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.455 [2024-07-25 07:32:05.958097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.455 [2024-07-25 07:32:05.958121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.455 [2024-07-25 07:32:05.958136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.455 [2024-07-25 07:32:05.961738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.455 [2024-07-25 07:32:05.971087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.455 [2024-07-25 07:32:05.971516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.455 [2024-07-25 07:32:05.971548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.455 [2024-07-25 07:32:05.971565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.455 [2024-07-25 07:32:05.971805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.455 [2024-07-25 07:32:05.972050] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.455 [2024-07-25 07:32:05.972074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.455 [2024-07-25 07:32:05.972088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.455 [2024-07-25 07:32:05.975697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:05.985075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:05.985528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:05.985560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:05.985577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:05.985817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:05.986062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:05.986086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:05.986101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:05.989705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:05.999076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:05.999484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:05.999516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:05.999535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:05.999774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.000018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.000048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.000064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.003674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.013026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.013486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.013517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.013535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.013775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.014020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.014043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.014059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.017686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.027044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.027504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.027532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.027549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.027805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.028050] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.028074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.028089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.031692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.041230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.041657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.041689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.041707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.041948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.042193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.042217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.042232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.045835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.055195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.055631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.055662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.055680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.055920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.056165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.056189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.056204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.059812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.069149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.069606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.069637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.069655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.069895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.070140] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.070164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.070179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.073781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.083139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.083583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.083615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.083633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.083873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.084118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.084142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.084157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.087758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.097101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.097529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.097561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.097579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.715 [2024-07-25 07:32:06.097828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.715 [2024-07-25 07:32:06.098073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.715 [2024-07-25 07:32:06.098097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.715 [2024-07-25 07:32:06.098112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.715 [2024-07-25 07:32:06.101716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.715 [2024-07-25 07:32:06.111067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.715 [2024-07-25 07:32:06.111542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.715 [2024-07-25 07:32:06.111569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.715 [2024-07-25 07:32:06.111599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.111852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.112097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.112121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.112136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.115762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.125112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.125558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.125589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.125607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.125846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.126091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.126114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.126130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.129733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.139088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.139544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.139587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.139603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.139876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.140121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.140145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.140165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.143773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.153118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.153564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.153595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.153613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.153853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.154097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.154121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.154136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.157736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.167076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.167525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.167556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.167574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.167813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.168058] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.168082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.168097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.171700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.181048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.181500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.181530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.181548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.181787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.182031] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.182054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.182070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.185672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.195022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.195466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.195497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.195515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.195755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.196000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.196024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.196039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.199643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.208988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.209377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.209408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.209426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.209666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.209910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.209934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.209949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.213549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.222932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.716 [2024-07-25 07:32:06.223378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.716 [2024-07-25 07:32:06.223410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:33.716 [2024-07-25 07:32:06.223428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:33.716 [2024-07-25 07:32:06.223668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:33.716 [2024-07-25 07:32:06.223912] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.716 [2024-07-25 07:32:06.223935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.716 [2024-07-25 07:32:06.223951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.716 [2024-07-25 07:32:06.227548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.716 [2024-07-25 07:32:06.236934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.716 [2024-07-25 07:32:06.237373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.716 [2024-07-25 07:32:06.237409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.716 [2024-07-25 07:32:06.237439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.716 [2024-07-25 07:32:06.237723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.716 [2024-07-25 07:32:06.237996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.716 [2024-07-25 07:32:06.238024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.716 [2024-07-25 07:32:06.238050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.716 [2024-07-25 07:32:06.241699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.976 [2024-07-25 07:32:06.250862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.976 [2024-07-25 07:32:06.251331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.976 [2024-07-25 07:32:06.251367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.976 [2024-07-25 07:32:06.251397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.976 [2024-07-25 07:32:06.251681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.976 [2024-07-25 07:32:06.251947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.976 [2024-07-25 07:32:06.251974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.976 [2024-07-25 07:32:06.251999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.976 [2024-07-25 07:32:06.255648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.976 [2024-07-25 07:32:06.264809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.976 [2024-07-25 07:32:06.265258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.976 [2024-07-25 07:32:06.265293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.976 [2024-07-25 07:32:06.265324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.976 [2024-07-25 07:32:06.265610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.976 [2024-07-25 07:32:06.265878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.976 [2024-07-25 07:32:06.265905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.976 [2024-07-25 07:32:06.265930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.976 [2024-07-25 07:32:06.269577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.976 [2024-07-25 07:32:06.278748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.976 [2024-07-25 07:32:06.279234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.976 [2024-07-25 07:32:06.279277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.976 [2024-07-25 07:32:06.279308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.976 [2024-07-25 07:32:06.279593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.976 [2024-07-25 07:32:06.279863] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.976 [2024-07-25 07:32:06.279890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.976 [2024-07-25 07:32:06.279915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.976 [2024-07-25 07:32:06.283567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.976 [2024-07-25 07:32:06.292725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.976 [2024-07-25 07:32:06.293163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.976 [2024-07-25 07:32:06.293198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.976 [2024-07-25 07:32:06.293227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.976 [2024-07-25 07:32:06.293523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.976 [2024-07-25 07:32:06.293789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.976 [2024-07-25 07:32:06.293816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.976 [2024-07-25 07:32:06.293841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.976 [2024-07-25 07:32:06.297484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.976 [2024-07-25 07:32:06.306638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.976 [2024-07-25 07:32:06.307120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.976 [2024-07-25 07:32:06.307155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.976 [2024-07-25 07:32:06.307184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.976 [2024-07-25 07:32:06.307486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.976 [2024-07-25 07:32:06.307755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.976 [2024-07-25 07:32:06.307782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.976 [2024-07-25 07:32:06.307807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.976 [2024-07-25 07:32:06.311452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.976 [2024-07-25 07:32:06.320621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.976 [2024-07-25 07:32:06.321036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.976 [2024-07-25 07:32:06.321072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.976 [2024-07-25 07:32:06.321101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.976 [2024-07-25 07:32:06.321398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.976 [2024-07-25 07:32:06.321666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.321693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.321718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.325367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.334530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.334993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.335029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.335066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.335358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.335625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.335652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.335677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.339329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.348500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.348970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.349005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.349035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.349328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.349596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.349623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.349648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.353304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.362496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.362956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.362992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.363022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.363329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.363599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.363626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.363651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.367380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.376544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.377003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.377037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.377066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.377359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.377634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.377662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.377688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.381343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.390504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.390969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.391004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.391034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.391330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.391599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.391626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.391650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.395303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.404468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.404944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.404979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.405009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.405307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.405576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.405602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.405628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.409277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.418469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.418894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.418929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.418959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.419252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.419520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.419548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.419573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.423219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.432447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.432930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.432972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.433001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.433298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.433566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.433593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.433618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.437278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.446442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.446920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.446955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.446985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.447277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.447545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.977 [2024-07-25 07:32:06.447572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.977 [2024-07-25 07:32:06.447598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.977 [2024-07-25 07:32:06.451239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.977 [2024-07-25 07:32:06.460419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.977 [2024-07-25 07:32:06.460882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.977 [2024-07-25 07:32:06.460917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.977 [2024-07-25 07:32:06.460947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.977 [2024-07-25 07:32:06.461229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.977 [2024-07-25 07:32:06.461520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.978 [2024-07-25 07:32:06.461542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.978 [2024-07-25 07:32:06.461562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.978 [2024-07-25 07:32:06.464675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.978 [2024-07-25 07:32:06.473798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.978 [2024-07-25 07:32:06.474282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.978 [2024-07-25 07:32:06.474315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.978 [2024-07-25 07:32:06.474348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.978 [2024-07-25 07:32:06.474637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.978 [2024-07-25 07:32:06.474857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.978 [2024-07-25 07:32:06.474878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.978 [2024-07-25 07:32:06.474899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.978 [2024-07-25 07:32:06.478033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.978 [2024-07-25 07:32:06.487177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.978 [2024-07-25 07:32:06.487640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.978 [2024-07-25 07:32:06.487671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.978 [2024-07-25 07:32:06.487698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.978 [2024-07-25 07:32:06.487987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.978 [2024-07-25 07:32:06.488200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.978 [2024-07-25 07:32:06.488235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.978 [2024-07-25 07:32:06.488266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.978 [2024-07-25 07:32:06.491414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.978 [2024-07-25 07:32:06.500705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.978 [2024-07-25 07:32:06.501203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.978 [2024-07-25 07:32:06.501257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:33.978 [2024-07-25 07:32:06.501286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:33.978 [2024-07-25 07:32:06.501573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:33.978 [2024-07-25 07:32:06.501832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.978 [2024-07-25 07:32:06.501855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.978 [2024-07-25 07:32:06.501890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.237 [2024-07-25 07:32:06.505334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.237 [2024-07-25 07:32:06.514181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.237 [2024-07-25 07:32:06.514671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.237 [2024-07-25 07:32:06.514701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:34.237 [2024-07-25 07:32:06.514727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:34.237 [2024-07-25 07:32:06.515009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:34.237 [2024-07-25 07:32:06.515222] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.237 [2024-07-25 07:32:06.515271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.237 [2024-07-25 07:32:06.515294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.237 [2024-07-25 07:32:06.518419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.237 [2024-07-25 07:32:06.527651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.237 [2024-07-25 07:32:06.528117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.237 [2024-07-25 07:32:06.528148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:34.237 [2024-07-25 07:32:06.528175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:34.237 [2024-07-25 07:32:06.528441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:34.237 [2024-07-25 07:32:06.528734] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.237 [2024-07-25 07:32:06.528758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.237 [2024-07-25 07:32:06.528782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.237 [2024-07-25 07:32:06.532195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.237 [2024-07-25 07:32:06.540904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.237 [2024-07-25 07:32:06.541284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.237 [2024-07-25 07:32:06.541315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:34.237 [2024-07-25 07:32:06.541341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:34.237 [2024-07-25 07:32:06.541613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:34.237 [2024-07-25 07:32:06.541827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.237 [2024-07-25 07:32:06.541848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.237 [2024-07-25 07:32:06.541868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.237 [2024-07-25 07:32:06.544954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.237 [2024-07-25 07:32:06.554048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.237 [2024-07-25 07:32:06.554528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.237 [2024-07-25 07:32:06.554573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:34.237 [2024-07-25 07:32:06.554600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:34.237 [2024-07-25 07:32:06.554868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:34.237 [2024-07-25 07:32:06.555081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.237 [2024-07-25 07:32:06.555103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.237 [2024-07-25 07:32:06.555123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.237 [2024-07-25 07:32:06.558236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.237 [2024-07-25 07:32:06.567296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.237 [2024-07-25 07:32:06.567763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.237 [2024-07-25 07:32:06.567795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.237 [2024-07-25 07:32:06.567822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.237 [2024-07-25 07:32:06.568107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.237 [2024-07-25 07:32:06.568365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.237 [2024-07-25 07:32:06.568388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.237 [2024-07-25 07:32:06.568409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.237 [2024-07-25 07:32:06.571431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.237 [2024-07-25 07:32:06.580588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.237 [2024-07-25 07:32:06.581003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.237 [2024-07-25 07:32:06.581034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.237 [2024-07-25 07:32:06.581060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.237 [2024-07-25 07:32:06.581352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.237 [2024-07-25 07:32:06.581591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.237 [2024-07-25 07:32:06.581627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.237 [2024-07-25 07:32:06.581647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.237 [2024-07-25 07:32:06.584660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.237 [2024-07-25 07:32:06.593809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.237 [2024-07-25 07:32:06.594232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.237 [2024-07-25 07:32:06.594269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.237 [2024-07-25 07:32:06.594295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.237 [2024-07-25 07:32:06.594582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.237 [2024-07-25 07:32:06.594795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.237 [2024-07-25 07:32:06.594817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.237 [2024-07-25 07:32:06.594837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.237 [2024-07-25 07:32:06.597886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.237 [2024-07-25 07:32:06.607058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.237 [2024-07-25 07:32:06.607481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.237 [2024-07-25 07:32:06.607513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.237 [2024-07-25 07:32:06.607540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.237 [2024-07-25 07:32:06.607831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.237 [2024-07-25 07:32:06.608044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.237 [2024-07-25 07:32:06.608065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.237 [2024-07-25 07:32:06.608085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.237 [2024-07-25 07:32:06.611178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.237 [2024-07-25 07:32:06.620415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.237 [2024-07-25 07:32:06.620911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.237 [2024-07-25 07:32:06.620942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.237 [2024-07-25 07:32:06.620970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.237 [2024-07-25 07:32:06.621275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.237 [2024-07-25 07:32:06.621514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.237 [2024-07-25 07:32:06.621537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.621558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.624587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.633715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.634102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.634132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.634158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.634464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.634714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.634736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.634756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.637787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.646974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.647385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.647415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.647441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.647722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.647935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.647956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.647982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.651032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.660332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.660746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.660777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.660803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.661086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.661341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.661364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.661386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.664437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.673580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.673949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.673978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.674002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.674290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.674516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.674539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.674560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.677583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.686881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.687324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.687354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.687380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.687650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.687862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.687883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.687903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.690939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.700152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.700639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.700673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.700699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.700970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.701197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.701218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.701238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.704304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.713459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.713908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.713937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.713963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.714230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.714491] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.714514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.714535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.717576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.726746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.727121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.727150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.727175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.727471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.727717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.238 [2024-07-25 07:32:06.727738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.238 [2024-07-25 07:32:06.727758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.238 [2024-07-25 07:32:06.730731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.238 [2024-07-25 07:32:06.740066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.238 [2024-07-25 07:32:06.740501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.238 [2024-07-25 07:32:06.740532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.238 [2024-07-25 07:32:06.740558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.238 [2024-07-25 07:32:06.740838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.238 [2024-07-25 07:32:06.741055] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.239 [2024-07-25 07:32:06.741077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.239 [2024-07-25 07:32:06.741096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.239 [2024-07-25 07:32:06.744140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.239 [2024-07-25 07:32:06.753313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.239 [2024-07-25 07:32:06.753743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.239 [2024-07-25 07:32:06.753772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.239 [2024-07-25 07:32:06.753796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.239 [2024-07-25 07:32:06.754062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.239 [2024-07-25 07:32:06.754301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.239 [2024-07-25 07:32:06.754323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.239 [2024-07-25 07:32:06.754344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.239 [2024-07-25 07:32:06.757355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.766839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.767257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.767289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.767315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.767602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.767815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.767837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.767856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.771173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.780240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.780742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.780774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.780801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.781082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.781355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.781380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.781402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.784781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.793689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.794054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.794083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.794108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.794390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.794642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.794663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.794683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.797806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.807078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.807518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.807549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.807576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.807861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.808073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.808094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.808114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.811204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.820431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.820865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.820897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.820923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.821207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.821467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.821490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.821511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.824643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.833756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.834119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.834149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.834180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.834502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.834746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.834784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.834805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.837837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.847039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.847478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.847509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.847536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.847830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.848042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.848065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.848085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.851120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.860310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.860743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.860773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.860800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.861100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.861375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.861398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.498 [2024-07-25 07:32:06.861420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.498 [2024-07-25 07:32:06.864445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.498 [2024-07-25 07:32:06.873677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.498 [2024-07-25 07:32:06.874097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.498 [2024-07-25 07:32:06.874127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.498 [2024-07-25 07:32:06.874153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.498 [2024-07-25 07:32:06.874453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.498 [2024-07-25 07:32:06.874701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.498 [2024-07-25 07:32:06.874727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.874748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.877748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.886885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.887304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.887336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.887363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.887650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.887863] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.887884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.887904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.890939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.900100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.900522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.900568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.900593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.900861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.901074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.901095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.901115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.904159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.913459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.913953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.913984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.914009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.914309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.914551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.914572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.914606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.917615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.926720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.927135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.927165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.927190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.927496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.927743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.927764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.927784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.930792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.940004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.940443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.940475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.940501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.940785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.940997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.941018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.941038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.944128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.953325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.953757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.953787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.953813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.954095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.954350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.954373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.954394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.957423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.966571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.967047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.967077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.967102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.967374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.967628] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.967649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.967670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.970638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.979803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.980295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.980326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.980353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.980641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.980853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.980874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.980894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.983939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:06.993167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:06.993651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:06.993682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:06.993709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:06.993979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:06.994198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:06.994220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.499 [2024-07-25 07:32:06.994240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.499 [2024-07-25 07:32:06.997313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.499 [2024-07-25 07:32:07.006509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.499 [2024-07-25 07:32:07.006902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.499 [2024-07-25 07:32:07.006932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.499 [2024-07-25 07:32:07.006956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.499 [2024-07-25 07:32:07.007237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.499 [2024-07-25 07:32:07.007484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.499 [2024-07-25 07:32:07.007507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.500 [2024-07-25 07:32:07.007549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.500 [2024-07-25 07:32:07.010569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.500 [2024-07-25 07:32:07.019868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.500 [2024-07-25 07:32:07.020388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.500 [2024-07-25 07:32:07.020422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.500 [2024-07-25 07:32:07.020449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.500 [2024-07-25 07:32:07.020734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.500 [2024-07-25 07:32:07.020946] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.500 [2024-07-25 07:32:07.020968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.500 [2024-07-25 07:32:07.020988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.500 [2024-07-25 07:32:07.024296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.758 [2024-07-25 07:32:07.033478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.758 [2024-07-25 07:32:07.033924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.758 [2024-07-25 07:32:07.033956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.758 [2024-07-25 07:32:07.033982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.758 [2024-07-25 07:32:07.034281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.758 [2024-07-25 07:32:07.034523] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.758 [2024-07-25 07:32:07.034548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.758 [2024-07-25 07:32:07.034570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.758 [2024-07-25 07:32:07.038022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.758 [2024-07-25 07:32:07.046792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.758 [2024-07-25 07:32:07.047274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.758 [2024-07-25 07:32:07.047306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.758 [2024-07-25 07:32:07.047332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.758 [2024-07-25 07:32:07.047625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.758 [2024-07-25 07:32:07.047837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.758 [2024-07-25 07:32:07.047858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.758 [2024-07-25 07:32:07.047878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.758 [2024-07-25 07:32:07.050991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.758 [2024-07-25 07:32:07.060092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.060510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.060556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.060581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.060845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.061057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.061078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.061098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.064181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.073355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.073783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.073812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.073837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.074101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.074359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.074383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.074404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.077433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.086595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.087009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.087039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.087064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.087361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.087587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.087621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.087641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.090648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.099820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.100235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.100287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.100315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.100618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.100831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.100852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.100872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.103918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.113117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.113545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.113591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.113617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.113888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.114101] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.114122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.114142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.117171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.126358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.126789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.126819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.126844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.127106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.127361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.127384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.127405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.130431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.140390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.140858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.140893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.140923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.141208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.141485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.141512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.141545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.145188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.154367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.154833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.154868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.154899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.155183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.155461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.155488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.155513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.159155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.168317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.168783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.168819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.168849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.169135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.169414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.169441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.169466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.173105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.182274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.182754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.759 [2024-07-25 07:32:07.182789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.759 [2024-07-25 07:32:07.182818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.759 [2024-07-25 07:32:07.183102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.759 [2024-07-25 07:32:07.183381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.759 [2024-07-25 07:32:07.183408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.759 [2024-07-25 07:32:07.183433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.759 [2024-07-25 07:32:07.187074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.759 [2024-07-25 07:32:07.196238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.759 [2024-07-25 07:32:07.196705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.196745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.196775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.197058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.197337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.197365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.197390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.201030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.760 [2024-07-25 07:32:07.210190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.760 [2024-07-25 07:32:07.210666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.210699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.210729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.211012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.211291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.211318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.211343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.214982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.760 [2024-07-25 07:32:07.224155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.760 [2024-07-25 07:32:07.224628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.224663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.224692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.224975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.225256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.225283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.225308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.228946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.760 [2024-07-25 07:32:07.238111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.760 [2024-07-25 07:32:07.238554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.238588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.238618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.238901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.239174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.239201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.239226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.242875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.760 [2024-07-25 07:32:07.252033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.760 [2024-07-25 07:32:07.252507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.252543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.252572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.252855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.253122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.253149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.253175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.256831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.760 [2024-07-25 07:32:07.265999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.760 [2024-07-25 07:32:07.266477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.266512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.266541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.266824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.267093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.267120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.267145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.270806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.760 [2024-07-25 07:32:07.279976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.760 [2024-07-25 07:32:07.280447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.760 [2024-07-25 07:32:07.280481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:34.760 [2024-07-25 07:32:07.280511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:34.760 [2024-07-25 07:32:07.280794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:34.760 [2024-07-25 07:32:07.281062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.760 [2024-07-25 07:32:07.281088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.760 [2024-07-25 07:32:07.281113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.760 [2024-07-25 07:32:07.284773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.293950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.294397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.294433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.294463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.294747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.295016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.295042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.295067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.298717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.307884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.308357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.308392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.308422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.308709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.308980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.309006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.309031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.312685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.321869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.322349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.322384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.322414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.322698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.322965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.322992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.323017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.326670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.335829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.336340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.336375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.336412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.336695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.336963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.336990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.337015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.340666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.349835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.350299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.350334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.350364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.350647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.350917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.350944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.350970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.354624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.363795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.364268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.364303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.364333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.364616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.364884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.364910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.364935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.368591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.377761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.378226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.378268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.378298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.378581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.019 [2024-07-25 07:32:07.378849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.019 [2024-07-25 07:32:07.378881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.019 [2024-07-25 07:32:07.378908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.019 [2024-07-25 07:32:07.382558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.019 [2024-07-25 07:32:07.391875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.019 [2024-07-25 07:32:07.392318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.019 [2024-07-25 07:32:07.392353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.019 [2024-07-25 07:32:07.392382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.019 [2024-07-25 07:32:07.392665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.392935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.392963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.392988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.396645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.405815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.406255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.406290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.406320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.406604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.406871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.406898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.406923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.410574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.419755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.420220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.420263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.420294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.420591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.420860] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.420886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.420911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.424561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.433732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.434171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.434206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.434235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.434531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.434801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.434828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.434853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.438503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.447664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.448132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.448167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.448196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.448490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.448757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.448784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.448809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.452460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.461630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.462109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.462144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.462173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.462481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.462749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.462775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.462801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.466455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.475626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.476046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.476092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.476123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.476429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.476707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.476734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.476759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.480411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.489587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.490184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.490248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.490280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.490564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.490832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.490860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.490886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.494548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.503538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.020 [2024-07-25 07:32:07.504087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.020 [2024-07-25 07:32:07.504122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.020 [2024-07-25 07:32:07.504152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.020 [2024-07-25 07:32:07.504448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.020 [2024-07-25 07:32:07.504717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.020 [2024-07-25 07:32:07.504744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.020 [2024-07-25 07:32:07.504769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.020 [2024-07-25 07:32:07.508435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.020 [2024-07-25 07:32:07.517611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.020 [2024-07-25 07:32:07.518067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.020 [2024-07-25 07:32:07.518101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.020 [2024-07-25 07:32:07.518130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.020 [2024-07-25 07:32:07.518424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.020 [2024-07-25 07:32:07.518696] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.020 [2024-07-25 07:32:07.518724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.020 [2024-07-25 07:32:07.518756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.020 [2024-07-25 07:32:07.522422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.020 [2024-07-25 07:32:07.531623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.020 [2024-07-25 07:32:07.532126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.020 [2024-07-25 07:32:07.532162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.020 [2024-07-25 07:32:07.532192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.020 [2024-07-25 07:32:07.532491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.020 [2024-07-25 07:32:07.532761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.021 [2024-07-25 07:32:07.532788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.021 [2024-07-25 07:32:07.532814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.021 [2024-07-25 07:32:07.536473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.021 [2024-07-25 07:32:07.545664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.021 [2024-07-25 07:32:07.546105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.021 [2024-07-25 07:32:07.546140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.021 [2024-07-25 07:32:07.546170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.021 [2024-07-25 07:32:07.546472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.021 [2024-07-25 07:32:07.546749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.021 [2024-07-25 07:32:07.546775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.021 [2024-07-25 07:32:07.546800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.279 [2024-07-25 07:32:07.550450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.279 [2024-07-25 07:32:07.559620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.279 [2024-07-25 07:32:07.560163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.279 [2024-07-25 07:32:07.560215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.279 [2024-07-25 07:32:07.560257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.279 [2024-07-25 07:32:07.560545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.279 [2024-07-25 07:32:07.560813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.279 [2024-07-25 07:32:07.560840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.279 [2024-07-25 07:32:07.560866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.279 [2024-07-25 07:32:07.564515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.279 [2024-07-25 07:32:07.573678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.279 [2024-07-25 07:32:07.574198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.279 [2024-07-25 07:32:07.574232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.279 [2024-07-25 07:32:07.574272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.279 [2024-07-25 07:32:07.574560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.279 [2024-07-25 07:32:07.574838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.279 [2024-07-25 07:32:07.574865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.279 [2024-07-25 07:32:07.574890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.279 [2024-07-25 07:32:07.578543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.279 [2024-07-25 07:32:07.587782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.279 [2024-07-25 07:32:07.588271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.279 [2024-07-25 07:32:07.588310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.279 [2024-07-25 07:32:07.588339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.279 [2024-07-25 07:32:07.588630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.279 [2024-07-25 07:32:07.588898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.279 [2024-07-25 07:32:07.588925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.279 [2024-07-25 07:32:07.588950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.279 [2024-07-25 07:32:07.592611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.279 [2024-07-25 07:32:07.601790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.279 [2024-07-25 07:32:07.602252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.279 [2024-07-25 07:32:07.602294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.602325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.602607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.602883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.602910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.602935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.606590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.615762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.616235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.616279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.616309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.616601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.616869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.616895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.616920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.620587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.629751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.630215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.630260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.630301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.630581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.630849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.630876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.630901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.634556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.643734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.644313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.644349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.644378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.644665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.644932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.644960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.644985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.648644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.657626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.658111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.658146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.658176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.658476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.658745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.658771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.658802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.662449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.671618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.672082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.672117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.672146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.672441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.672709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.672735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.672760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.676410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.685573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.686057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.686092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.686122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.686415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.686684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.686710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.686735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.690390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.699552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.700156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.700210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.700239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.700538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.700806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.700833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.700858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.704501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.713507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.713981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.714022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.714052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.714349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.714617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.280 [2024-07-25 07:32:07.714644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.280 [2024-07-25 07:32:07.714669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.280 [2024-07-25 07:32:07.718319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.280 [2024-07-25 07:32:07.727496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.280 [2024-07-25 07:32:07.727961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.280 [2024-07-25 07:32:07.727996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.280 [2024-07-25 07:32:07.728025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.280 [2024-07-25 07:32:07.728320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.280 [2024-07-25 07:32:07.728588] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.281 [2024-07-25 07:32:07.728615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.281 [2024-07-25 07:32:07.728640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.281 [2024-07-25 07:32:07.732285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.281 [2024-07-25 07:32:07.741457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.281 [2024-07-25 07:32:07.741919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.281 [2024-07-25 07:32:07.741954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.281 [2024-07-25 07:32:07.741984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.281 [2024-07-25 07:32:07.742287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.281 [2024-07-25 07:32:07.742555] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.281 [2024-07-25 07:32:07.742582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.281 [2024-07-25 07:32:07.742607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.281 [2024-07-25 07:32:07.746249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.281 [2024-07-25 07:32:07.755406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.281 [2024-07-25 07:32:07.755847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.281 [2024-07-25 07:32:07.755882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.281 [2024-07-25 07:32:07.755912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.281 [2024-07-25 07:32:07.756194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.281 [2024-07-25 07:32:07.756479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.281 [2024-07-25 07:32:07.756507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.281 [2024-07-25 07:32:07.756532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.281 [2024-07-25 07:32:07.760170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.281 [2024-07-25 07:32:07.769338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.281 [2024-07-25 07:32:07.769809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.281 [2024-07-25 07:32:07.769843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.281 [2024-07-25 07:32:07.769873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.281 [2024-07-25 07:32:07.770154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.281 [2024-07-25 07:32:07.770435] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.281 [2024-07-25 07:32:07.770462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.281 [2024-07-25 07:32:07.770488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.281 [2024-07-25 07:32:07.774155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.281 [2024-07-25 07:32:07.783321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.281 [2024-07-25 07:32:07.783783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.281 [2024-07-25 07:32:07.783818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.281 [2024-07-25 07:32:07.783847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.281 [2024-07-25 07:32:07.784132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.281 [2024-07-25 07:32:07.784410] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.281 [2024-07-25 07:32:07.784437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.281 [2024-07-25 07:32:07.784462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.281 [2024-07-25 07:32:07.788100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.281 [2024-07-25 07:32:07.797300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.281 [2024-07-25 07:32:07.797764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.281 [2024-07-25 07:32:07.797799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.281 [2024-07-25 07:32:07.797829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.281 [2024-07-25 07:32:07.798112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.281 [2024-07-25 07:32:07.798390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.281 [2024-07-25 07:32:07.798416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.281 [2024-07-25 07:32:07.798442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.281 [2024-07-25 07:32:07.802088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.540 [2024-07-25 07:32:07.811259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.540 [2024-07-25 07:32:07.811697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.540 [2024-07-25 07:32:07.811731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.540 [2024-07-25 07:32:07.811761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.540 [2024-07-25 07:32:07.812045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.540 [2024-07-25 07:32:07.812322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.540 [2024-07-25 07:32:07.812349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.540 [2024-07-25 07:32:07.812374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.540 [2024-07-25 07:32:07.816015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.540 [2024-07-25 07:32:07.825139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.540 [2024-07-25 07:32:07.825633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.540 [2024-07-25 07:32:07.825669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.540 [2024-07-25 07:32:07.825699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.540 [2024-07-25 07:32:07.825981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.540 [2024-07-25 07:32:07.826258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.541 [2024-07-25 07:32:07.826285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.541 [2024-07-25 07:32:07.826310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.541 [2024-07-25 07:32:07.829957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.541 [2024-07-25 07:32:07.839126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.541 [2024-07-25 07:32:07.839618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.541 [2024-07-25 07:32:07.839654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.541 [2024-07-25 07:32:07.839683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.541 [2024-07-25 07:32:07.839967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.541 [2024-07-25 07:32:07.840237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.541 [2024-07-25 07:32:07.840276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.541 [2024-07-25 07:32:07.840302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.541 [2024-07-25 07:32:07.843947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.541 [2024-07-25 07:32:07.853107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.541 [2024-07-25 07:32:07.853580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.541 [2024-07-25 07:32:07.853614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.541 [2024-07-25 07:32:07.853652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.541 [2024-07-25 07:32:07.853934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.541 [2024-07-25 07:32:07.854202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.541 [2024-07-25 07:32:07.854228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.541 [2024-07-25 07:32:07.854266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.541 [2024-07-25 07:32:07.857908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.541 [2024-07-25 07:32:07.867071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.541 [2024-07-25 07:32:07.867543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.541 [2024-07-25 07:32:07.867578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.541 [2024-07-25 07:32:07.867608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.541 [2024-07-25 07:32:07.867891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.541 [2024-07-25 07:32:07.868159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.541 [2024-07-25 07:32:07.868185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.541 [2024-07-25 07:32:07.868210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.541 [2024-07-25 07:32:07.871865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.541 [2024-07-25 07:32:07.881024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.541 [2024-07-25 07:32:07.881496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.541 [2024-07-25 07:32:07.881530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.541 [2024-07-25 07:32:07.881560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.541 [2024-07-25 07:32:07.881842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.541 [2024-07-25 07:32:07.882111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.541 [2024-07-25 07:32:07.882138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.541 [2024-07-25 07:32:07.882163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.541 [2024-07-25 07:32:07.885816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.541 [2024-07-25 07:32:07.894973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.541 [2024-07-25 07:32:07.895440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.541 [2024-07-25 07:32:07.895475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.541 [2024-07-25 07:32:07.895504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.541 [2024-07-25 07:32:07.895786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.541 [2024-07-25 07:32:07.896053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.541 [2024-07-25 07:32:07.896085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.541 [2024-07-25 07:32:07.896112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.541 [2024-07-25 07:32:07.899770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.541 [2024-07-25 07:32:07.908924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.541 [2024-07-25 07:32:07.909371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.541 [2024-07-25 07:32:07.909406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.541 [2024-07-25 07:32:07.909436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.541 [2024-07-25 07:32:07.909717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.541 [2024-07-25 07:32:07.909985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.541 [2024-07-25 07:32:07.910012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.541 [2024-07-25 07:32:07.910038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.541 [2024-07-25 07:32:07.913691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.541 [2024-07-25 07:32:07.922863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.541 [2024-07-25 07:32:07.923332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.541 [2024-07-25 07:32:07.923367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.541 [2024-07-25 07:32:07.923397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.541 [2024-07-25 07:32:07.923680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.541 [2024-07-25 07:32:07.923950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.541 [2024-07-25 07:32:07.923977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.541 [2024-07-25 07:32:07.924001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.541 [2024-07-25 07:32:07.927655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.541 [2024-07-25 07:32:07.936819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.541 [2024-07-25 07:32:07.937298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.541 [2024-07-25 07:32:07.937332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.541 [2024-07-25 07:32:07.937362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.541 [2024-07-25 07:32:07.937645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.541 [2024-07-25 07:32:07.937912] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.541 [2024-07-25 07:32:07.937939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.541 [2024-07-25 07:32:07.937965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.541 [2024-07-25 07:32:07.941619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.541 [2024-07-25 07:32:07.950795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.541 [2024-07-25 07:32:07.951258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.541 [2024-07-25 07:32:07.951293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.541 [2024-07-25 07:32:07.951322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.541 [2024-07-25 07:32:07.951605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.541 [2024-07-25 07:32:07.951872] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.541 [2024-07-25 07:32:07.951899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.541 [2024-07-25 07:32:07.951924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.541 [2024-07-25 07:32:07.955578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.541 [2024-07-25 07:32:07.964744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.541 [2024-07-25 07:32:07.965184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.541 [2024-07-25 07:32:07.965219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.541 [2024-07-25 07:32:07.965259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.541 [2024-07-25 07:32:07.965542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.541 [2024-07-25 07:32:07.965813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.541 [2024-07-25 07:32:07.965839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.542 [2024-07-25 07:32:07.965864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.542 [2024-07-25 07:32:07.969514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.542 [2024-07-25 07:32:07.978676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.542 [2024-07-25 07:32:07.979142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.542 [2024-07-25 07:32:07.979176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.542 [2024-07-25 07:32:07.979206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.542 [2024-07-25 07:32:07.979501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.542 [2024-07-25 07:32:07.979769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.542 [2024-07-25 07:32:07.979795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.542 [2024-07-25 07:32:07.979820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.542 [2024-07-25 07:32:07.983471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.542 [2024-07-25 07:32:07.992629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.542 [2024-07-25 07:32:07.993086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.542 [2024-07-25 07:32:07.993121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.542 [2024-07-25 07:32:07.993151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.542 [2024-07-25 07:32:07.993453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.542 [2024-07-25 07:32:07.993722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.542 [2024-07-25 07:32:07.993748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.542 [2024-07-25 07:32:07.993773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.542 [2024-07-25 07:32:07.997425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2572818 Killed "${NVMF_APP[@]}" "$@"
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:35.542 [2024-07-25 07:32:08.006607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:35.542 [2024-07-25 07:32:08.007070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.542 [2024-07-25 07:32:08.007105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.542 [2024-07-25 07:32:08.007135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.542 [2024-07-25 07:32:08.007435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.542 [2024-07-25 07:32:08.007704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.542 [2024-07-25 07:32:08.007732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.542 [2024-07-25 07:32:08.007757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2573904
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2573904
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2573904 ']'
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:35.542 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:35.542 [2024-07-25 07:32:08.011412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
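[annotation] The `waitforlisten 2573904` step above polls until the freshly started `nvmf_tgt` process is alive and its RPC socket `/var/tmp/spdk.sock` exists. A minimal sketch of that pattern, under stated assumptions (the helper name `wait_for_sock`, the 0.1 s poll interval, and the socket probe are illustrative, not the autotest implementation):

```shell
# Illustrative waitforlisten-style helper (NOT the SPDK autotest code):
# succeed once the UNIX-domain socket exists while the target process is
# still alive; give up after max_retries polls or if the process dies.
wait_for_sock() {
    local pid=$1 sock=$2 max_retries=${3:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        kill -0 "$pid" 2>/dev/null || return 1  # target process died
        [ -S "$sock" ] && return 0              # socket file has appeared
        sleep 0.1
        i=$((i + 1))
    done
    return 1  # gave up waiting
}
```

In the log, a wait like this succeeds only once the restarted target has bound its socket; until then the host side keeps logging the reconnect failures seen above.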
00:26:35.542 [2024-07-25 07:32:08.020589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.542 [2024-07-25 07:32:08.021114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.542 [2024-07-25 07:32:08.021151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.542 [2024-07-25 07:32:08.021188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.542 [2024-07-25 07:32:08.021487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.542 [2024-07-25 07:32:08.021768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.542 [2024-07-25 07:32:08.021795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.542 [2024-07-25 07:32:08.021820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.542 [2024-07-25 07:32:08.025483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.542 [2024-07-25 07:32:08.034682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.542 [2024-07-25 07:32:08.035146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.542 [2024-07-25 07:32:08.035181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.542 [2024-07-25 07:32:08.035211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.542 [2024-07-25 07:32:08.035502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.542 [2024-07-25 07:32:08.035772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.542 [2024-07-25 07:32:08.035799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.542 [2024-07-25 07:32:08.035824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.542 [2024-07-25 07:32:08.039482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.542 [2024-07-25 07:32:08.048654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.542 [2024-07-25 07:32:08.049097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.542 [2024-07-25 07:32:08.049133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.542 [2024-07-25 07:32:08.049162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.542 [2024-07-25 07:32:08.049460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.542 [2024-07-25 07:32:08.049728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.542 [2024-07-25 07:32:08.049755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.542 [2024-07-25 07:32:08.049781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.542 [2024-07-25 07:32:08.053435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.542 [2024-07-25 07:32:08.060781] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:26:35.542 [2024-07-25 07:32:08.060854] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:35.542 [2024-07-25 07:32:08.062746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.542 [2024-07-25 07:32:08.063196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.542 [2024-07-25 07:32:08.063233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.542 [2024-07-25 07:32:08.063274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.542 [2024-07-25 07:32:08.063563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.542 [2024-07-25 07:32:08.063830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.542 [2024-07-25 07:32:08.063857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.542 [2024-07-25 07:32:08.063883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.542 [2024-07-25 07:32:08.067548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.076903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.077378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.077413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.077443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.077727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.077996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.078023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.078049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.081699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.090858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.091332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.091368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.091397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.091683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.091951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.091978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.092004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.095653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 EAL: No free 2048 kB hugepages reported on node 1
00:26:35.802 [2024-07-25 07:32:08.104814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.105282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.105319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.105349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.105633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.105900] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.105927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.105959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.109616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.118786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.119225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.119270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.119301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.119583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.119852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.119879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.119903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.123584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.132763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.133228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.133272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.133301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.133589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.133857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.133884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.133910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.137535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:35.802 [2024-07-25 07:32:08.137559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.146770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.147431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.147480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.147515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.147806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.148084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.148111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.148142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.151814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.160829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.161398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.161436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.161468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.161761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.162031] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.162058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.162084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.165739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.174913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.175389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.175426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.175455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.175738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.176006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.176033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.176059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.802 [2024-07-25 07:32:08.179580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.802 [2024-07-25 07:32:08.188369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.802 [2024-07-25 07:32:08.188835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.802 [2024-07-25 07:32:08.188867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.802 [2024-07-25 07:32:08.188894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.802 [2024-07-25 07:32:08.189185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.802 [2024-07-25 07:32:08.189434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.802 [2024-07-25 07:32:08.189458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.802 [2024-07-25 07:32:08.189479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.803 [2024-07-25 07:32:08.192634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.803 [2024-07-25 07:32:08.201806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.803 [2024-07-25 07:32:08.202334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.803 [2024-07-25 07:32:08.202375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.803 [2024-07-25 07:32:08.202405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.803 [2024-07-25 07:32:08.202695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.803 [2024-07-25 07:32:08.202920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.803 [2024-07-25 07:32:08.202943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.803 [2024-07-25 07:32:08.202970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.803 [2024-07-25 07:32:08.206148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.803 [2024-07-25 07:32:08.215275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.803 [2024-07-25 07:32:08.215901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.803 [2024-07-25 07:32:08.215939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420
00:26:35.803 [2024-07-25 07:32:08.215970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set
00:26:35.803 [2024-07-25 07:32:08.216284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor
00:26:35.803 [2024-07-25 07:32:08.216557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.803 [2024-07-25 07:32:08.216581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.803 [2024-07-25 07:32:08.216606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.803 [2024-07-25 07:32:08.219722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.803 [2024-07-25 07:32:08.228652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.229089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.229121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.803 [2024-07-25 07:32:08.229149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.803 [2024-07-25 07:32:08.229438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.803 [2024-07-25 07:32:08.229684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.803 [2024-07-25 07:32:08.229706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.803 [2024-07-25 07:32:08.229727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.803 [2024-07-25 07:32:08.232868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.803 [2024-07-25 07:32:08.241966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.242441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.242473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.803 [2024-07-25 07:32:08.242501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.803 [2024-07-25 07:32:08.242793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.803 [2024-07-25 07:32:08.243014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.803 [2024-07-25 07:32:08.243036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.803 [2024-07-25 07:32:08.243070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.803 [2024-07-25 07:32:08.246267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.803 [2024-07-25 07:32:08.251427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.803 [2024-07-25 07:32:08.251458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.803 [2024-07-25 07:32:08.251472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.803 [2024-07-25 07:32:08.251483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:35.803 [2024-07-25 07:32:08.251492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.803 [2024-07-25 07:32:08.251558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.803 [2024-07-25 07:32:08.251619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.803 [2024-07-25 07:32:08.251621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.803 [2024-07-25 07:32:08.255628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.256177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.256212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.803 [2024-07-25 07:32:08.256252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.803 [2024-07-25 07:32:08.256516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.803 [2024-07-25 07:32:08.256771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.803 [2024-07-25 07:32:08.256795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.803 [2024-07-25 07:32:08.256820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.803 [2024-07-25 07:32:08.260129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.803 [2024-07-25 07:32:08.269284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.269968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.270014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.803 [2024-07-25 07:32:08.270047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.803 [2024-07-25 07:32:08.270317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.803 [2024-07-25 07:32:08.270578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.803 [2024-07-25 07:32:08.270603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.803 [2024-07-25 07:32:08.270630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.803 [2024-07-25 07:32:08.273928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.803 [2024-07-25 07:32:08.282865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.283532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.283582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.803 [2024-07-25 07:32:08.283616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.803 [2024-07-25 07:32:08.283903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.803 [2024-07-25 07:32:08.284142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.803 [2024-07-25 07:32:08.284166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.803 [2024-07-25 07:32:08.284194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.803 [2024-07-25 07:32:08.287548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.803 [2024-07-25 07:32:08.296476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.297170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.297218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.803 [2024-07-25 07:32:08.297259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.803 [2024-07-25 07:32:08.297526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.803 [2024-07-25 07:32:08.297774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.803 [2024-07-25 07:32:08.297799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.803 [2024-07-25 07:32:08.297827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.803 [2024-07-25 07:32:08.301113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.803 [2024-07-25 07:32:08.310204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.803 [2024-07-25 07:32:08.310740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.803 [2024-07-25 07:32:08.310784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.804 [2024-07-25 07:32:08.310816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.804 [2024-07-25 07:32:08.311091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.804 [2024-07-25 07:32:08.311360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.804 [2024-07-25 07:32:08.311385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.804 [2024-07-25 07:32:08.311412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.804 [2024-07-25 07:32:08.314735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.804 [2024-07-25 07:32:08.323888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.804 [2024-07-25 07:32:08.324561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.804 [2024-07-25 07:32:08.324610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:35.804 [2024-07-25 07:32:08.324643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:35.804 [2024-07-25 07:32:08.324919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:35.804 [2024-07-25 07:32:08.325158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.804 [2024-07-25 07:32:08.325182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.804 [2024-07-25 07:32:08.325237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.804 [2024-07-25 07:32:08.328620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 [2024-07-25 07:32:08.337578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.338110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.338154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.338185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.338458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.338712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.338737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.338763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.342064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 [2024-07-25 07:32:08.351254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.351667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.351699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.351726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.351994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.352251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.352275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.352298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.355624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 [2024-07-25 07:32:08.364845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.365236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.365274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.365301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.365557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.365798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.365823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.365846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.369128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.063 [2024-07-25 07:32:08.378371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.378829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.378861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.378888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.379158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.379433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.379458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.379481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.382793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 [2024-07-25 07:32:08.391888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.392305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.392338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.392367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.392636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.392869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.392894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.392918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.396214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.063 [2024-07-25 07:32:08.404956] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.063 [2024-07-25 07:32:08.405563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.405965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.405996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.406023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.406287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.406539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.406584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.406607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.409905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 [2024-07-25 07:32:08.419133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.419558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.419590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.419617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.419895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.420122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.420144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.063 [2024-07-25 07:32:08.420165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.063 [2024-07-25 07:32:08.423495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.063 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.063 [2024-07-25 07:32:08.432711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.063 [2024-07-25 07:32:08.433119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.063 [2024-07-25 07:32:08.433150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.063 [2024-07-25 07:32:08.433177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.063 [2024-07-25 07:32:08.433445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.063 [2024-07-25 07:32:08.433696] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.063 [2024-07-25 07:32:08.433720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.064 [2024-07-25 07:32:08.433741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.064 [2024-07-25 07:32:08.437040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.064 [2024-07-25 07:32:08.446399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.064 [2024-07-25 07:32:08.447081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.064 [2024-07-25 07:32:08.447130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.064 [2024-07-25 07:32:08.447164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.064 [2024-07-25 07:32:08.447436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.064 [2024-07-25 07:32:08.447699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.064 [2024-07-25 07:32:08.447735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.064 [2024-07-25 07:32:08.447765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.064 [2024-07-25 07:32:08.451068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.064 Malloc0 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.064 [2024-07-25 07:32:08.459990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.064 [2024-07-25 07:32:08.460524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.064 [2024-07-25 07:32:08.460557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.064 [2024-07-25 07:32:08.460586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.064 [2024-07-25 07:32:08.460856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.064 [2024-07-25 07:32:08.461090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.064 [2024-07-25 07:32:08.461114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.064 [2024-07-25 07:32:08.461137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.064 [2024-07-25 07:32:08.464446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.064 [2024-07-25 07:32:08.473513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.064 [2024-07-25 07:32:08.473936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.064 [2024-07-25 07:32:08.473968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430840 with addr=10.0.0.2, port=4420 00:26:36.064 [2024-07-25 07:32:08.473994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430840 is same with the state(5) to be set 00:26:36.064 [2024-07-25 07:32:08.474257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430840 (9): Bad file descriptor 00:26:36.064 [2024-07-25 07:32:08.474500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.064 
[2024-07-25 07:32:08.474524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.064 [2024-07-25 07:32:08.474561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.064 [2024-07-25 07:32:08.474832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.064 [2024-07-25 07:32:08.477936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.064 07:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2573235 00:26:36.064 [2024-07-25 07:32:08.487196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.064 [2024-07-25 07:32:08.562522] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:46.030 00:26:46.030 Latency(us) 00:26:46.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.030 Verification LBA range: start 0x0 length 0x4000 00:26:46.030 Nvme1n1 : 15.01 6644.91 25.96 8913.24 0.00 8202.48 831.34 18544.26 00:26:46.030 =================================================================================================================== 00:26:46.030 Total : 6644.91 25.96 8913.24 0.00 8202.48 831.34 18544.26 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.030 rmmod nvme_tcp 00:26:46.030 rmmod nvme_fabrics 00:26:46.030 rmmod nvme_keyring 00:26:46.030 07:32:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2573904 ']' 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2573904 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2573904 ']' 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2573904 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2573904 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2573904' 00:26:46.030 killing process with pid 2573904 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2573904 00:26:46.030 07:32:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2573904 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.030 07:32:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.030 07:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.930 00:26:47.930 real 0m23.276s 00:26:47.930 user 1m3.104s 00:26:47.930 sys 0m4.208s 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.930 ************************************ 00:26:47.930 END TEST nvmf_bdevperf 00:26:47.930 ************************************ 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.930 ************************************ 00:26:47.930 START TEST nvmf_target_disconnect 00:26:47.930 ************************************ 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:47.930 * Looking for test storage... 
00:26:47.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.930 07:32:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.930 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.931 07:32:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.866 
07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.866 07:32:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.866 07:32:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.866 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.867 07:32:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:26:49.867 00:26:49.867 --- 10.0.0.2 ping statistics --- 00:26:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.867 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:26:49.867 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:26:50.127 00:26:50.127 --- 10.0.0.1 ping statistics --- 00:26:50.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.127 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.127 ************************************ 00:26:50.127 START TEST nvmf_target_disconnect_tc1 00:26:50.127 ************************************ 00:26:50.127 07:32:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.127 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.127 [2024-07-25 07:32:22.524485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.127 [2024-07-25 07:32:22.524567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fd1a0 with addr=10.0.0.2, port=4420 00:26:50.127 [2024-07-25 07:32:22.524611] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:50.127 [2024-07-25 07:32:22.524642] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:50.127 [2024-07-25 07:32:22.524657] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:50.127 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:50.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:50.127 Initializing NVMe Controllers 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.127 07:32:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.127 00:26:50.127 real 0m0.089s 00:26:50.127 user 0m0.038s 00:26:50.127 sys 0m0.052s 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.127 ************************************ 00:26:50.127 END TEST nvmf_target_disconnect_tc1 00:26:50.127 ************************************ 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.127 ************************************ 00:26:50.127 START TEST nvmf_target_disconnect_tc2 00:26:50.127 ************************************ 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2577052 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2577052 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2577052 ']' 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.127 07:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.127 [2024-07-25 07:32:22.639983] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:26:50.127 [2024-07-25 07:32:22.640072] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.386 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.386 [2024-07-25 07:32:22.709738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.386 [2024-07-25 07:32:22.832648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.386 [2024-07-25 07:32:22.832719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.386 [2024-07-25 07:32:22.832737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.386 [2024-07-25 07:32:22.832751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.386 [2024-07-25 07:32:22.832762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.386 [2024-07-25 07:32:22.832859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:26:50.386 [2024-07-25 07:32:22.832921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:26:50.386 [2024-07-25 07:32:22.832974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:26:50.386 [2024-07-25 07:32:22.832978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 Malloc0 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.318 07:32:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 [2024-07-25 07:32:23.644179] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.318 07:32:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 [2024-07-25 07:32:23.672464] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2577204 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:51.318 07:32:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:51.318 EAL: No free 2048 kB 
hugepages reported on node 1 00:26:53.222 07:32:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2577052 00:26:53.222 07:32:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 
00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 [2024-07-25 07:32:25.697713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 
starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Write completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.222 Read completed with error (sct=0, sc=8) 00:26:53.222 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Write completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Write completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Write completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Write completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O 
failed 00:26:53.223 [2024-07-25 07:32:25.698036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.223 [2024-07-25 07:32:25.698221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.698260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.698412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.698438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.698581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.698605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.698794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.698821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.699036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.699079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 
00:26:53.223 [2024-07-25 07:32:25.699238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.699273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.699425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.699452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.699631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.699658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.699887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.699913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.700048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.700073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 
00:26:53.223 [2024-07-25 07:32:25.700257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.700284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.700416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.700442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.700571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.700597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.700753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.700780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.700916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.700943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 
00:26:53.223 [2024-07-25 07:32:25.701097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.701123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.701255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.701282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.701411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.701437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.701594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.701622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.701778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.701804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 
00:26:53.223 [2024-07-25 07:32:25.701929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.701955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.702091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.702118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.702282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.702309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.702470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.702497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.702632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.702660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 
00:26:53.223 [2024-07-25 07:32:25.702815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.702842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.703027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.703054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.703214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.703240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.703385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.703413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 [2024-07-25 07:32:25.703547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.703579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 
00:26:53.223 [2024-07-25 07:32:25.703816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.223 [2024-07-25 07:32:25.703861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:53.223 qpair failed and we were unable to recover it. 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.223 starting I/O failed 00:26:53.223 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 
Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Write completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 Read completed with error (sct=0, sc=8) 00:26:53.224 starting I/O failed 00:26:53.224 [2024-07-25 07:32:25.704193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.224 [2024-07-25 07:32:25.704380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.704421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.704567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.704595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 
00:26:53.224 [2024-07-25 07:32:25.704756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.704783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.704930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.704957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.705117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.705146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.705291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.705321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.705457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.705485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 
00:26:53.224 [2024-07-25 07:32:25.705641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.705667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.705834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.705860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.706043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.706072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.706256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.706285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.706413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.706440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 
00:26:53.224 [2024-07-25 07:32:25.706568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.706595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.706751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.706777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.706931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.706957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.707103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.707129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.707282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.707309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 
00:26:53.224 [2024-07-25 07:32:25.707468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.707494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.707631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.707657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.707819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.707858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.707997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.708041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.708201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.708227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 
00:26:53.224 [2024-07-25 07:32:25.708368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.708395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.708519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.708546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.708698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.708725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.708875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.708901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 00:26:53.224 [2024-07-25 07:32:25.709019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.224 [2024-07-25 07:32:25.709046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.224 qpair failed and we were unable to recover it. 
00:26:53.225 [2024-07-25 07:32:25.709193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.709220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 
00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Read completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 Write completed with error (sct=0, sc=8) 00:26:53.225 starting I/O failed 00:26:53.225 [2024-07-25 07:32:25.709557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:53.225 [2024-07-25 07:32:25.709839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.709874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.710070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.710098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 
00:26:53.225 [2024-07-25 07:32:25.710257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.710285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.710419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.710446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.710583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.710612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.710840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.710867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.711011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.711053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 
00:26:53.225 [2024-07-25 07:32:25.711266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.711294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.711431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.711458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.711589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.711615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.712406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.712433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.712556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.712582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 
00:26:53.225 [2024-07-25 07:32:25.712761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.712786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.712920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.712962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.713185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.713214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.713381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.713408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.713603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.713629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 
00:26:53.225 [2024-07-25 07:32:25.713852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.713891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.714093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.714119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.714255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.714282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.714403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.714429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.714557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.714587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 
00:26:53.225 [2024-07-25 07:32:25.714761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.714788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.714982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.715052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.225 qpair failed and we were unable to recover it. 00:26:53.225 [2024-07-25 07:32:25.715295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.225 [2024-07-25 07:32:25.715326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.715451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.715478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.715669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.715695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.715935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.715965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.716163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.716192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.716368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.716394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.716520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.716558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.716802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.716827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.716958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.716983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.717188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.717214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.717415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.717440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.717678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.717723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.717944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.717991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.718190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.718219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.718388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.718414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.718543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.718569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.718692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.718718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.718901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.718926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.719062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.719091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.719262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.719289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.719424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.719451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.719652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.719678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.719869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.719895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.720100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.720129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.720308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.720336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.720469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.720495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.720670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.720711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.720924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.720969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.721143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.721170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.721327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.721355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.721487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.721513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.721726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.721751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.721958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.721986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 
00:26:53.226 [2024-07-25 07:32:25.722220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.722260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.722458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.722484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.722624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.722666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.722929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.722975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.226 qpair failed and we were unable to recover it. 00:26:53.226 [2024-07-25 07:32:25.723167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.226 [2024-07-25 07:32:25.723207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.227 qpair failed and we were unable to recover it. 
00:26:53.227 [2024-07-25 07:32:25.728191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.227 [2024-07-25 07:32:25.728235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:53.227 qpair failed and we were unable to recover it.
00:26:53.230 [2024-07-25 07:32:25.744920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.744945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.745092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.745118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.745276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.745303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.745432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.745458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.745608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.745634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 
00:26:53.230 [2024-07-25 07:32:25.745789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.745815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.745981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.746010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.746208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.746255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.746404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.746431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.746557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.746602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 
00:26:53.230 [2024-07-25 07:32:25.746763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.746791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.746966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.746996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.747170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.747199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.230 [2024-07-25 07:32:25.747356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.230 [2024-07-25 07:32:25.747386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.230 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.747538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.747564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 
00:26:53.514 [2024-07-25 07:32:25.747731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.747757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.747956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.748003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.748177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.748203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.748370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.748397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.748556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.748602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 
00:26:53.514 [2024-07-25 07:32:25.748767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.748793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.748915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.748941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.749099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.749125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.749274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.749301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.749421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.749447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 
00:26:53.514 [2024-07-25 07:32:25.749599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.749625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.749803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.749829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.750002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.750031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.750210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.750236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.750378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.750404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 
00:26:53.514 [2024-07-25 07:32:25.750582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.750626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.750805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.750853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.751031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.751057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.751233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.751266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.751390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.751416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 
00:26:53.514 [2024-07-25 07:32:25.751569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.514 [2024-07-25 07:32:25.751596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.514 qpair failed and we were unable to recover it. 00:26:53.514 [2024-07-25 07:32:25.751772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.751815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.751989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.752016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.752197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.752223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.752390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.752417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.752596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.752622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.752790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.752816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.752964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.752989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.753150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.753175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.753322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.753349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.753494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.753520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.753707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.753733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.753896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.753922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.754105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.754134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.754317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.754347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.754516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.754547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.754711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.754744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.754872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.754900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.755062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.755088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.755237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.755288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.755424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.755453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.755599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.755625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.755746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.755790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.755959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.755985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.756136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.756162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.756305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.756332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.756483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.756509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.756668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.756693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.756813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.756838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.757024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.757052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.757232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.757278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.757440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.757465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.757645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.757670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.757790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.757816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.757990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.758033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 
00:26:53.515 [2024-07-25 07:32:25.758200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.515 [2024-07-25 07:32:25.758249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.515 qpair failed and we were unable to recover it. 00:26:53.515 [2024-07-25 07:32:25.758399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.758425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.758546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.758573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.758745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.758772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.758924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.758951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 
00:26:53.516 [2024-07-25 07:32:25.759122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.759152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.759331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.759358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.759510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.759545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.759730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.759756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 00:26:53.516 [2024-07-25 07:32:25.759942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.516 [2024-07-25 07:32:25.759971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.516 qpair failed and we were unable to recover it. 
00:26:53.519 [2024-07-25 07:32:25.781263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.781289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.781478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.781504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.781665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.781691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.781852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.781878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.782028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.782054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 
00:26:53.519 [2024-07-25 07:32:25.782232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.782268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.782449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.782479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.782653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.782685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.782861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.782895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.783068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.783094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 
00:26:53.519 [2024-07-25 07:32:25.783267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.783294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.783447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.783473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.783625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.783651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.783833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.783858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.784054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.784082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 
00:26:53.519 [2024-07-25 07:32:25.784226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.784262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.784426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.784452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.784580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.784606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.784734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.784760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.784992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.785018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 
00:26:53.519 [2024-07-25 07:32:25.785221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.785258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.785436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.785462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.785618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.519 [2024-07-25 07:32:25.785645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.519 qpair failed and we were unable to recover it. 00:26:53.519 [2024-07-25 07:32:25.785825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.785851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.786041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.786069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.786238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.786270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.786440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.786470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.786660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.786689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.786838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.786865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.786990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.787018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.787239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.787282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.787463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.787489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.787633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.787662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.787829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.787858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.788000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.788027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.788184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.788227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.788406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.788433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.788552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.788578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.788724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.788767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.788938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.788967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.789158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.789184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.789343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.789369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.789547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.789577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.789755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.789781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.789944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.789973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.790137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.790165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.790314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.790341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.790488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.790532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.790706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.790739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.790919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.790946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.791127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.791153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.791308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.791335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.791489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.791516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.791633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.791659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.791869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.791895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.792023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.792049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.792195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.792221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.792386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.792411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.792531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.792557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 00:26:53.520 [2024-07-25 07:32:25.792738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.520 [2024-07-25 07:32:25.792780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.520 qpair failed and we were unable to recover it. 
00:26:53.520 [2024-07-25 07:32:25.792945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.792974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.793147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.793173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.793314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.793341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.793498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.793524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.793688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.793713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 
00:26:53.521 [2024-07-25 07:32:25.793880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.793909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.794100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.794129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.794306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.794332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.794458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.794485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.794612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.794637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 
00:26:53.521 [2024-07-25 07:32:25.794786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.794813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.794966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.794992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.795131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.795161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.795357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.795384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 00:26:53.521 [2024-07-25 07:32:25.795537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.521 [2024-07-25 07:32:25.795564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.521 qpair failed and we were unable to recover it. 
00:26:53.521 [2024-07-25 07:32:25.795719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.521 [2024-07-25 07:32:25.795749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.521 qpair failed and we were unable to recover it.
[... identical three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats without variation from 2024-07-25 07:32:25.795924 through 07:32:25.817356, log time 00:26:53.521-00:26:53.524 ...]
00:26:53.524 [2024-07-25 07:32:25.817527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.817554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.817688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.817714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.817837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.817864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.818043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.818069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.818218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.818255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 
00:26:53.524 [2024-07-25 07:32:25.818434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.818460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.818614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.818640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.818808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.818837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.819032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.819061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.819204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.819230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 
00:26:53.524 [2024-07-25 07:32:25.819424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.819454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.819630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.819656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.819778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.819804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.524 qpair failed and we were unable to recover it. 00:26:53.524 [2024-07-25 07:32:25.820004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.524 [2024-07-25 07:32:25.820033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.820200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.820229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.820434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.820461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.820598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.820628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.820822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.820848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.820998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.821027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.821181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.821207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.821364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.821409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.821558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.821584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.821733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.821759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.821887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.821914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.822092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.822118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.822331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.822360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.822492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.822521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.822700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.822725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.822852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.822879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.823032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.823059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.823218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.823250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.823374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.823400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.823554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.823580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.823717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.823743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.823890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.823916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.824062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.824091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.824291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.824318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.824466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.824492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.824644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.824670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.824829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.824855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.825002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.825027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.825147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.825173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.825327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.825354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.825505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.825531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.825704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.825734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.825886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.825913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.826036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.826063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.826220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.826253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.826405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.826431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.525 [2024-07-25 07:32:25.826639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.826668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 
00:26:53.525 [2024-07-25 07:32:25.826845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.525 [2024-07-25 07:32:25.826871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.525 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.827055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.827081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.827205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.827232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.827411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.827437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.827586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.827612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.827740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.827767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.827945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.827990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.828138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.828163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.828315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.828375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.828526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.828558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.828759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.828785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.828943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.828969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.829149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.829175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.829321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.829348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.829522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.829742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.829770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.829938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.829964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.830111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.830137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.830314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.830340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.830488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.830514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.830694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.830720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.830838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.830864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.831003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.831029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.831179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.831205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.831380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.831410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.831598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.831624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.831785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.831811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.831987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.832013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.832215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.832250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.832386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.832411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.832565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.832591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.832748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.832774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.832925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.832951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.833164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.833193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.833399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.833425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 
00:26:53.526 [2024-07-25 07:32:25.833615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.833659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.833860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.833888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.526 [2024-07-25 07:32:25.834047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.526 [2024-07-25 07:32:25.834074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.526 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.834292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.834320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.834477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.834503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.834637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.834663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.834817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.834860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.835031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.835059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.835223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.835256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.835405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.835430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.835559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.835585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.835730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.835755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.835907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.835932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.836117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.836142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.836303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.836330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.836474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.836503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.836650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.836680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.836883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.836909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.837085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.837113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.837310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.837339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.837508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.837534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.837657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.837702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.837897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.837925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.838121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.838146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.838320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.838349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.838528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.838554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.838716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.838743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.838872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.838902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.839109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.839138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.839274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.839300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.839453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.839495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.839664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.839692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.839859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.839884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.840074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.840099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.840270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.840299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 
00:26:53.527 [2024-07-25 07:32:25.840472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.840499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.840644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.840687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.840858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.840886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.841075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.841104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.527 qpair failed and we were unable to recover it. 00:26:53.527 [2024-07-25 07:32:25.841281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.527 [2024-07-25 07:32:25.841308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.841458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.841484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.841642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.841669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.841946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.842175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.842336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.842488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.842662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.842814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.842962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.842988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.843142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.843167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.843318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.843345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.843472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.843499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.843719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.843745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.843930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.843956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.844078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.844107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.844273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.844300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.844447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.844472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.844652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.844678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.844872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.844901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.845043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.845068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.845215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.845262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.845403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.845429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.845562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.845588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.845765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.845791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.845958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.845987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.846132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.846159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.846287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.846313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.846440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.846466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.846620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.846646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.846822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.846850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 
00:26:53.528 [2024-07-25 07:32:25.847010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.847038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.528 qpair failed and we were unable to recover it. 00:26:53.528 [2024-07-25 07:32:25.847211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.528 [2024-07-25 07:32:25.847237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.847425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.847451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.847597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.847625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.847823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.847849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 
00:26:53.529 [2024-07-25 07:32:25.848006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.848032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.848187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.848213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.848344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.848370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.848488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.848515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.848674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.848702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 
00:26:53.529 [2024-07-25 07:32:25.848868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.848894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.849071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.849103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.849316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.849343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.849476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.849502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 00:26:53.529 [2024-07-25 07:32:25.849671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.529 [2024-07-25 07:32:25.849699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.529 qpair failed and we were unable to recover it. 
00:26:53.529 [2024-07-25 07:32:25.849859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.529 [2024-07-25 07:32:25.849887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.529 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) to addr=10.0.0.2, port=4420 repeats through 07:32:25.871563, first for tqpair=0x1c29250 and then for tqpair=0x7f3d3c000b90; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:53.532 [2024-07-25 07:32:25.871716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.871742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.871868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.871894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.872075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.872105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.872286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.872316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.872484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.872515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 
00:26:53.532 [2024-07-25 07:32:25.872687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.872713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.872926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.872978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.873208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.873237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.873403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.873432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.873608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.873634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 
00:26:53.532 [2024-07-25 07:32:25.873781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.873806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.873964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.874007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.874212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.874238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.874433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.874459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.874716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.874764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 
00:26:53.532 [2024-07-25 07:32:25.874909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.874938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.875103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.875131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.875284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.875311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.875466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.875508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 00:26:53.532 [2024-07-25 07:32:25.875713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.532 [2024-07-25 07:32:25.875739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.532 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.875899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.875925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.876082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.876108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.876239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.876272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.876449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.876475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.876622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.876652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.876823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.876849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.877009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.877034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.877156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.877182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.877309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.877341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.877477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.877503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.877679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.877705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.877822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.877848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.878054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.878083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.878252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.878279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.878435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.878461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.878589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.878615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.878753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.878782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.878959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.878985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.879164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.879192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.879347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.879374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.879532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.879559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.879712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.879738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.879912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.879941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.880074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.880104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.880266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.880296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.880450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.880476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.880687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.880739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.880912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.880941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.881111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.881139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.533 [2024-07-25 07:32:25.881317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.881343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 
00:26:53.533 [2024-07-25 07:32:25.881514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.533 [2024-07-25 07:32:25.881543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.533 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.881717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.881745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.881907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.881936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.882107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.882134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.882303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.882333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 
00:26:53.534 [2024-07-25 07:32:25.882485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.882517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.882698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.882724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.882911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.882937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.883072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.883101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.883295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.883325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 
00:26:53.534 [2024-07-25 07:32:25.883496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.883526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.883681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.883707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.883911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.883940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.884067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.884096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.884258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.884288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 
00:26:53.534 [2024-07-25 07:32:25.884441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.884467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.884645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.884688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.884863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.884891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.885098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.885123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.885281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.885308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 
00:26:53.534 [2024-07-25 07:32:25.885480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.885509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.885654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.885680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.885856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.885881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.886006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.886031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 00:26:53.534 [2024-07-25 07:32:25.886177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.886203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 
00:26:53.534 [2024-07-25 07:32:25.886405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.534 [2024-07-25 07:32:25.886431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.534 qpair failed and we were unable to recover it. 
00:26:53.537 last message repeated 114 times (connect() to 10.0.0.2 port 4420 refused, errno = 111, through [2024-07-25 07:32:25.908896]) 
00:26:53.537 [2024-07-25 07:32:25.909051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.537 [2024-07-25 07:32:25.909080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.537 qpair failed and we were unable to recover it. 00:26:53.537 [2024-07-25 07:32:25.909248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.537 [2024-07-25 07:32:25.909277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.537 qpair failed and we were unable to recover it. 00:26:53.537 [2024-07-25 07:32:25.909439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.537 [2024-07-25 07:32:25.909468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.537 qpair failed and we were unable to recover it. 00:26:53.537 [2024-07-25 07:32:25.909606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.537 [2024-07-25 07:32:25.909632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.537 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.909784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.909810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.909960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.909986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.910156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.910184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.910339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.910367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.910602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.910653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.910814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.910842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.911009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.911038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.911191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.911217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.911354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.911380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.911563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.911591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.911758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.911787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.911958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.911983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.912180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.912209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.912417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.912443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.912635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.912664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.912864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.912890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.913063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.913093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.913266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.913296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.913451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.913477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.913598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.913624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.913741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.913768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.913948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.913982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.914158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.914187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.914394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.914421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.914570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.914634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.914797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.914826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.915006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.915035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.915191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.915216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.915383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.915410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.915535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.915561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.915711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.915737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.915892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.915918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.916093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.916122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.916291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.916321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.916497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.916525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 00:26:53.538 [2024-07-25 07:32:25.916711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.916737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.538 qpair failed and we were unable to recover it. 
00:26:53.538 [2024-07-25 07:32:25.916936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.538 [2024-07-25 07:32:25.916965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.917125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.917154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.917277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.917306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.917458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.917484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.917608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.917634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.917785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.917812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.917982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.918010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.918183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.918209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.918393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.918422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.918586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.918615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.918792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.918817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.918995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.919020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.919215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.919252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.919419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.919446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.919648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.919676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.919820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.919847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.920002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.920046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.920218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.920253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.920403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.920440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.920599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.920625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.920900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.920953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.921123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.921151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.921322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.921348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.921506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.921532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.921682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.921745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.921913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.921946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.922113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.922141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.922288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.922315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.922463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.922504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.922699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.922728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.922887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.922916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.923086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.923112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.923249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.923275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.923418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.923446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 00:26:53.539 [2024-07-25 07:32:25.923629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.539 [2024-07-25 07:32:25.923658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.539 qpair failed and we were unable to recover it. 
00:26:53.539 [2024-07-25 07:32:25.923802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.539 [2024-07-25 07:32:25.923828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.539 qpair failed and we were unable to recover it.
00:26:53.540 [... the same connect() failed / sock connection error / qpair failed triplet repeated 73 more times for tqpair=0x7f3d3c000b90, timestamps 07:32:25.924024 through 07:32:25.938174 ...]
00:26:53.541 [2024-07-25 07:32:25.938425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.541 [2024-07-25 07:32:25.938466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.541 qpair failed and we were unable to recover it.
00:26:53.543 [... the same triplet repeated 40 more times for tqpair=0x1c29250, timestamps 07:32:25.938600 through 07:32:25.946496 ...]
00:26:53.543 [2024-07-25 07:32:25.946683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.946708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.946854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.946879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.947050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.947078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.947250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.947294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.947456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.947482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 
00:26:53.543 [2024-07-25 07:32:25.947639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.947665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.947893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.947949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.948129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.948157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.948305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.948334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.948482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.948509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 
00:26:53.543 [2024-07-25 07:32:25.948701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.948729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.948986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.949045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.949216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.949251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.949399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.949424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.949557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.949583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 
00:26:53.543 [2024-07-25 07:32:25.949792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.949820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.949963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.949991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.950143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.950169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.950304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.950330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.950467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.950493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 
00:26:53.543 [2024-07-25 07:32:25.950647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.950673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.950821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.950847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.951004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.951029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.951234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.951268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.951440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.951465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 
00:26:53.543 [2024-07-25 07:32:25.951616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.951642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.951822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.951850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.952028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.952055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.952192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.952220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.952407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.952433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 
00:26:53.543 [2024-07-25 07:32:25.952621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.952649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.952793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.952821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.543 [2024-07-25 07:32:25.952995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.543 [2024-07-25 07:32:25.953023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.543 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.953180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.953205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.953326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37230 is same with the state(5) to be set 00:26:53.544 [2024-07-25 07:32:25.953513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.953579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.953760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.953788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.953948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.953975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.954112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.954141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.954308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.954335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.954492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.954519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.954688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.954715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.954875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.954901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.955026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.955051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.955181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.955207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.955342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.955368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.955527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.955560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.955717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.955743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.955859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.955885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.956020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.956045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.956193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.956219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.956354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.956378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.956509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.956544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.956699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.956724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.956872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.956896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.957024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.957050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.957201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.957226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.957356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.957381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.957516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.957541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.957677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.957703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.957857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.957881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.958028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.958057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.958288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.958331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.958458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.958485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.958650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.958682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.958801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.958825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 
00:26:53.544 [2024-07-25 07:32:25.958957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.958984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.959118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.959143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.959278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.959304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.544 qpair failed and we were unable to recover it. 00:26:53.544 [2024-07-25 07:32:25.959457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.544 [2024-07-25 07:32:25.959481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 00:26:53.545 [2024-07-25 07:32:25.959610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.545 [2024-07-25 07:32:25.959635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 
00:26:53.545 [2024-07-25 07:32:25.959767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.545 [2024-07-25 07:32:25.959792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 00:26:53.545 [2024-07-25 07:32:25.959946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.545 [2024-07-25 07:32:25.959970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 00:26:53.545 [2024-07-25 07:32:25.960130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.545 [2024-07-25 07:32:25.960155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 00:26:53.545 [2024-07-25 07:32:25.960296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.545 [2024-07-25 07:32:25.960322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 00:26:53.545 [2024-07-25 07:32:25.960456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.545 [2024-07-25 07:32:25.960480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.545 qpair failed and we were unable to recover it. 
00:26:53.545 [2024-07-25 07:32:25.960608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.960632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.960761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.960786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.960920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.960945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.961104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.961130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.961292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.961318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.961441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.961466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.961630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.961655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.961805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.961830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.961962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.961987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.962140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.962166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.962309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.962335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.962481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.962506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.962668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.962693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.962811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.962836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.962960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.962985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.963136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.963165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.963297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.963322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.963443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.963469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.963622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.963648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.963804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.963829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.963980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.964004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.964155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.964184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.964367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.964393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.964515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.964547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.964700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.964726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.964875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.964900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.965053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.965077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.965205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.965230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.965369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.965393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.965570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.965596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.965756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.545 [2024-07-25 07:32:25.965782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.545 qpair failed and we were unable to recover it.
00:26:53.545 [2024-07-25 07:32:25.965912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.965938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.966090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.966115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.966290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.966319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.966510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.966538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.966681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.966706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.966886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.966911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.967053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.967079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.967285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.967327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.967481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.967507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.967623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.967648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.967798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.967824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.967953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.967982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.968125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.968150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.968282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.968307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.968442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.968467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.968607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.968632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.968760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.968785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.968938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.968962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.969094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.969120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.969249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.969275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.969408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.969433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.969556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.969581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.969701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.969725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.969890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.969915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.970034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.970060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.970189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.970214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.970340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.970366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.970532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.970556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.970684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.970709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.970837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.970863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.971013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.971037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.971183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.971208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.971377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.971404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.971544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.971569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.971728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.971753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.971936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.971961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.972091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.546 [2024-07-25 07:32:25.972115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.546 qpair failed and we were unable to recover it.
00:26:53.546 [2024-07-25 07:32:25.972231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.972262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.972411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.972436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.972604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.972629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.972778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.972803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.972934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.972961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.973113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.973138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.973294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.973319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.973442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.973468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.973643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.973667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.973787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.973812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.973993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.974019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.974172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.974197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.974356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.974381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.974532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.974557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.974678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.974704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.974857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.974881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.975030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.975055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.975178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.975202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.975359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.975384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.975516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.975541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.975673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.975697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.975831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.975855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.976026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.976051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.976197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.976225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.976371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.976396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.976545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.976573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.976764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.976792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.976951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.976975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.977146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.977174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.977336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.977362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.977515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.977539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.977661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.547 [2024-07-25 07:32:25.977686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.547 qpair failed and we were unable to recover it.
00:26:53.547 [2024-07-25 07:32:25.977812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.977837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.977995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.978019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.978202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.978226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.978362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.978387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.978517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.978541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.978689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.978714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.978840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.978865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.979045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.979069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.979251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.979294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.979447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.979472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.979619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.979648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.979773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.979798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.979926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.979951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.980103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.980128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.980255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.548 [2024-07-25 07:32:25.980280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.548 qpair failed and we were unable to recover it.
00:26:53.548 [2024-07-25 07:32:25.980439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.980465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.980612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.980636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.980777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.980803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.980964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.980989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.981164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.981189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 
00:26:53.548 [2024-07-25 07:32:25.981326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.981352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.981530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.981555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.981694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.981719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.981838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.981862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.982041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.982065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 
00:26:53.548 [2024-07-25 07:32:25.982185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.982210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.982352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.982378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.982529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.982554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.982701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.982726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.982885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.982910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 
00:26:53.548 [2024-07-25 07:32:25.983089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.983113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.983245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.983270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.983399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.983424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.983547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.983572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.983692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.983716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 
00:26:53.548 [2024-07-25 07:32:25.983867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.983892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.984026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.548 [2024-07-25 07:32:25.984051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.548 qpair failed and we were unable to recover it. 00:26:53.548 [2024-07-25 07:32:25.984175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.984203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.984344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.984369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.984493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.984520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.984675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.984700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.984854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.984881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.985042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.985067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.985215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.985240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.985406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.985431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.985585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.985610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.985734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.985759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.985929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.985955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.986125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.986152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.986313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.986340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.986494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.986520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.986656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.986681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.986823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.986848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.987000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.987026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.987187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.987212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.987339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.987365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.987496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.987522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.987673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.987698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.987852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.987877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.988045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.988071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.988220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.988250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.988413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.988438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.988590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.988615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.988772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.988797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.988947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.988971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.989151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.989180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.989406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.989433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.989550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.989576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.989697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.989723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.989846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.989871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.989997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.990022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.990179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.990207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.990369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.990394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.990515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.990540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 00:26:53.549 [2024-07-25 07:32:25.990670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.549 [2024-07-25 07:32:25.990696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.549 qpair failed and we were unable to recover it. 
00:26:53.549 [2024-07-25 07:32:25.990863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.990888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.991039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.991063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.991258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.991300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.991457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.991483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.991643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.991668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 
00:26:53.550 [2024-07-25 07:32:25.991795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.991822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.991972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.991997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.992146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.992171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.992367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.992393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.992516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.992540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 
00:26:53.550 [2024-07-25 07:32:25.992669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.992696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.992824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.992849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.993000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.993025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.993177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.993202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 00:26:53.550 [2024-07-25 07:32:25.993338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.550 [2024-07-25 07:32:25.993364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.550 qpair failed and we were unable to recover it. 
00:26:53.550 [2024-07-25 07:32:25.993488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.550 [2024-07-25 07:32:25.993512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.550 qpair failed and we were unable to recover it.
00:26:53.550 [... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, only the timestamps advancing, through 00:26:53.553 [2024-07-25 07:32:26.014561] ...]
00:26:53.553 [2024-07-25 07:32:26.014753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.014781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.014948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.014973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.015138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.015162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.015312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.015340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.015504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.015532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 
00:26:53.553 [2024-07-25 07:32:26.015731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.015759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.015925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.015952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.016107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.016132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.016266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.016293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.016449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.016473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 
00:26:53.553 [2024-07-25 07:32:26.016634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.016659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.016833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.016858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.016982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.017006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.553 [2024-07-25 07:32:26.017164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.553 [2024-07-25 07:32:26.017189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.553 qpair failed and we were unable to recover it. 00:26:53.837 [2024-07-25 07:32:26.017326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.017370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.017541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.017570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.017757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.017785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.017932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.017957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.018090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.018115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.018282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.018311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.018454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.018479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.018633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.018658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.018835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.018865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.018992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.019017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.019146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.019171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.019304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.019329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.019449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.019474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.019632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.019656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.019825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.019852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.019997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.020021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.020146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.020172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.020363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.020391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.020557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.020585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.020748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.020777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.020948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.020973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.021108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.021132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.021288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.021317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.021495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.021522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.021666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.021693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.021862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.021889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.022034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.022060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.022194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.022218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.022392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.022419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.022584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.022612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.022802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.022830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.022966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.022992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.023159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.023183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.023359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.023388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.023542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.023570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.023752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.023784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 
00:26:53.838 [2024-07-25 07:32:26.023952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.023980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.838 [2024-07-25 07:32:26.024126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.838 [2024-07-25 07:32:26.024150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.838 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.024311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.024353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.024574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.024602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.024772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.024799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.024970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.024995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.025153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.025178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.025323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.025351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.025577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.025627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.025782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.025809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.025975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.025999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.026128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.026153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.026330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.026355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.026487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.026511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.026672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.026698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.026849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.026873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.027004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.027160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.027319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.027474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.027639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.027818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.027962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.027987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.028161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.028190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.028370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.028396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.028525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.028551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.028729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.028755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.028901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.028927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.029079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.029108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.029263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.029290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.029470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.029495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.029624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.029649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.029775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.029801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.029979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.030003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.030135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.030160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 
00:26:53.839 [2024-07-25 07:32:26.030314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.030340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.030495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.030520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.030685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.030710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.030860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.030885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.839 qpair failed and we were unable to recover it. 00:26:53.839 [2024-07-25 07:32:26.031042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.839 [2024-07-25 07:32:26.031067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.031223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.031277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.031429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.031459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.031606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.031634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.031761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.031798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.031982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.032019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.032175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.032213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.032378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.032405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.032585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.032611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.032731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.032756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.032908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.032932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.033076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.033104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.033269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.033295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.033419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.033445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.033611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.033636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.033796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.033821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.033984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.034009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.034161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.034187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.034326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.034352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.034499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.034524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.034661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.034687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.034843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.034867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.035012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.035038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.035165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.035191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.035348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.035375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.035520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.035547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.035751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.035780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.035941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.035969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.036130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.036158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.036337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.036366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.036524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.036552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.036721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.036750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.036984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.037014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.037205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.037234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.037543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.037573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.037773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.037799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 
00:26:53.840 [2024-07-25 07:32:26.037928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.037953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.038128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.038156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.840 qpair failed and we were unable to recover it. 00:26:53.840 [2024-07-25 07:32:26.038318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.840 [2024-07-25 07:32:26.038344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.038469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.038494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.038641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.038666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.038795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.038821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.039003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.039029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.039196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.039225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.039379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.039404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.039547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.039572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.039722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.039747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.039900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.039925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.040098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.040126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.040269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.040295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.040421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.040447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.040604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.040629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.040760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.040785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.040970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.040996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.041120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.041146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.041309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.041339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.041465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.041489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.041652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.041676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.041828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.041853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.042006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.042030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.042188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.042213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.042337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.042363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.042492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.042517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.042657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.042682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.042835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.042860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.042983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.043008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.043134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.043159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.043311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.043349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.043498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.043526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.043667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.043694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.043838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.043864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.044002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.044028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.044183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.044209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.044365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.044391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.044514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.044539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 00:26:53.841 [2024-07-25 07:32:26.044676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.044716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.841 qpair failed and we were unable to recover it. 
00:26:53.841 [2024-07-25 07:32:26.044889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.841 [2024-07-25 07:32:26.044913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.045096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.045125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.045262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.045289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.045445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.045472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.045635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.045661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 
00:26:53.842 [2024-07-25 07:32:26.045799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.045824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.045950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.045980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.046161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.046190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.046366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.046415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.046595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.046621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 
00:26:53.842 [2024-07-25 07:32:26.046750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.046776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.046937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.046962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.047087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.047112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.047237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.047269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 00:26:53.842 [2024-07-25 07:32:26.047432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.842 [2024-07-25 07:32:26.047456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.842 qpair failed and we were unable to recover it. 
00:26:53.842 [2024-07-25 07:32:26.047602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.047628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.047779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.047805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.047933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.047957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.048085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.048110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.048234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.048265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.048419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.048443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.048570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.048595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.048723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.048747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.048918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.048942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.049120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.049144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.049288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.049314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.049478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.049503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.049655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.049679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.049865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.049890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.050045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.050069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.050190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.050215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.842 [2024-07-25 07:32:26.050342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.842 [2024-07-25 07:32:26.050367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.842 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.050516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.050540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.050664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.050693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.050820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.050844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.050969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.050993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.051111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.051135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.051295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.051321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.051451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.051476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.051618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.051642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.051769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.051794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.051940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.051965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.052089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.052113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.052239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.052281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.052406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.052431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.052588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.052612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.052736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.052761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.052918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.052942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.053094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.053118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.053277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.053303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.053452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.053476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.053639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.053663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.053824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.053850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.053981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.054129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.054286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.054437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.054639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.054824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.054970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.054994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.055123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.055153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.055312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.055338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.055487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.055511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.055699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.055724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.055876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.055901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.056019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.056043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.056172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.056196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.056352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.056378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.056495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.056519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.843 [2024-07-25 07:32:26.056698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.843 [2024-07-25 07:32:26.056723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.843 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.056875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.056900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.057053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.057077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.057231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.057262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.057419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.057444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.057579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.057603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.057758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.057782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.057912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.057938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.058064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.058088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.058211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.058236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.058414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.058439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.058599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.058623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.058769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.058794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.058955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.058980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.059121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.059145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.059272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.059298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.059454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.059479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.059608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.059632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.059778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.059803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.059961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.059987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.060165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.060189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.060318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.060344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.060514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.060540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.060671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.060695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.060846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.060870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.061049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.061077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.061216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.061247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.061376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.061401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.061531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.061556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.061710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.061735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.061862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.061886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.062047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.062072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.062215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.062276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.062437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.062466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.062670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.062699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.062894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.062922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.063080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.063106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.063314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.063342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.844 qpair failed and we were unable to recover it.
00:26:53.844 [2024-07-25 07:32:26.063509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.844 [2024-07-25 07:32:26.063537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.845 qpair failed and we were unable to recover it.
00:26:53.845 [2024-07-25 07:32:26.063723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.845 [2024-07-25 07:32:26.063752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.845 qpair failed and we were unable to recover it.
00:26:53.845 [2024-07-25 07:32:26.063950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.845 [2024-07-25 07:32:26.063978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.845 qpair failed and we were unable to recover it.
00:26:53.845 [2024-07-25 07:32:26.064159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.064186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.064351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.064379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.064570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.064599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.064770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.064797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.064985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.065013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.065204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.065232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.065457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.065486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.065668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.065695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.065860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.065897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.066102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.066128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.066294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.066322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.066515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.066543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.066730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.066758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.066963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.066990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.067200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.067230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.067396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.067421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.067548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.067573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.067737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.067763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.067883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.067912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.068066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.068091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.068248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.068274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.068429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.068454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.068586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.068611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.068770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.068795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.068913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.068937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.069090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.069115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.069282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.069309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.069433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.069457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.069589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.069613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.069768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.069795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.069928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.069953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.070130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.070154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.070294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.070321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.070444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.070468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.070621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.070645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 
00:26:53.845 [2024-07-25 07:32:26.070797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.845 [2024-07-25 07:32:26.070822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.845 qpair failed and we were unable to recover it. 00:26:53.845 [2024-07-25 07:32:26.070953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.070977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.071132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.071157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.071290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.071316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.071443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.071467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.071604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.071629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.071808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.071833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.071966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.071991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.072123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.072148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.072276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.072302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.072421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.072451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.072604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.072629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.072777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.072802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.072952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.072976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.073094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.073118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.073310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.073336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.073459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.073484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.073665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.073691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.073837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.073862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.074017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.074042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.074172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.074198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.074367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.074393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.074569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.074594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.074775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.074799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.074940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.074965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.075113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.075138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.075291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.075317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.075474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.075500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.075620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.075646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.075767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.075792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.075982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.076008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.076189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.076214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.076373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.076399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.076521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.076546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.076701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.076727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 
00:26:53.846 [2024-07-25 07:32:26.076851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.076875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.077034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.077059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.077184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.077214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.846 [2024-07-25 07:32:26.077375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.846 [2024-07-25 07:32:26.077401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.846 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.077562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.077588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 
00:26:53.847 [2024-07-25 07:32:26.077741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.077766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.077945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.077969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.078097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.078124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.078304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.078330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.078457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.078482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 
00:26:53.847 [2024-07-25 07:32:26.078669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.078695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.078853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.078878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.079034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.079059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.079215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.079246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 00:26:53.847 [2024-07-25 07:32:26.079404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.847 [2024-07-25 07:32:26.079429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.847 qpair failed and we were unable to recover it. 
00:26:53.847 [2024-07-25 07:32:26.079585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.847 [2024-07-25 07:32:26.079610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.847 qpair failed and we were unable to recover it.
00:26:53.850 (the three messages above repeated verbatim for every reconnect attempt from [2024-07-25 07:32:26.079763] through [2024-07-25 07:32:26.099848]; only the timestamps differ)
00:26:53.850 [2024-07-25 07:32:26.099963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.099988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.100135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.100159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.100310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.100335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.100490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.100516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.100675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.100699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 
00:26:53.850 [2024-07-25 07:32:26.100842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.100866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.101043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.101068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.101221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.101253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.101409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.101433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.101618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.101643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 
00:26:53.850 [2024-07-25 07:32:26.101819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.101843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.101992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.102016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.102174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.102199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.102360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.102386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.102561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.102585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 
00:26:53.850 [2024-07-25 07:32:26.102734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.102759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.102906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.102930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.103104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.103129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.103293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.103319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.850 qpair failed and we were unable to recover it. 00:26:53.850 [2024-07-25 07:32:26.103502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.850 [2024-07-25 07:32:26.103527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.103676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.103700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.103854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.103879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.104039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.104064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.104207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.104231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.104394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.104419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.104543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.104568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.104718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.104742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.104875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.104901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.105075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.105100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.105247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.105272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.105421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.105446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.105593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.105618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.105747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.105771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.105928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.105953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.106102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.106127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.106283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.106312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.106467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.106493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.106661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.106687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.106809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.106833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.107011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.107036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.107170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.107196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.107328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.107354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.107533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.107561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.107719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.107746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.107951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.107976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.108121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.108146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.108301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.108326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.108506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.108531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.108662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.108687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.108847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.108873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 
00:26:53.851 [2024-07-25 07:32:26.109024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.109050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.109182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.851 [2024-07-25 07:32:26.109209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.851 qpair failed and we were unable to recover it. 00:26:53.851 [2024-07-25 07:32:26.109377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.109403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.109559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.109584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.109717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.109743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 
00:26:53.852 [2024-07-25 07:32:26.109918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.109943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.110094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.110119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.110272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.110299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.110425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.110450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.110604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.110629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 
00:26:53.852 [2024-07-25 07:32:26.110765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.110790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.110940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.110964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.111090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.111118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.111294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.111320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.111474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.111499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 
00:26:53.852 [2024-07-25 07:32:26.111643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.111667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.111795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.111820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.111998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.112023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.112149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.112174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.112314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.112339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 
00:26:53.852 [2024-07-25 07:32:26.112468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.112492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.112677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.112701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.112855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.112881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.113052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.113077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.113231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.113275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 
00:26:53.852 [2024-07-25 07:32:26.113432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.113459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.113614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.113639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.113812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.113841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.113988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.114013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 00:26:53.852 [2024-07-25 07:32:26.114162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.852 [2024-07-25 07:32:26.114187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.852 qpair failed and we were unable to recover it. 
00:26:53.855 [2024-07-25 07:32:26.133937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.133962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.134114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.134139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.134264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.134290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.134475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.134500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.134612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.134636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 
00:26:53.855 [2024-07-25 07:32:26.134823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.134847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.135023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.135047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.135200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.135226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.135396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.135421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.135593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.135617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 
00:26:53.855 [2024-07-25 07:32:26.135771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.135796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.135953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.135978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.855 [2024-07-25 07:32:26.136124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.855 [2024-07-25 07:32:26.136149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.855 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.136317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.136343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.136498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.136523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.136650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.136674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.136847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.136872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.137058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.137083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.137261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.137286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.137440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.137464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.137591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.137616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.137739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.137764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.137920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.137944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.138091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.138116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.138269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.138294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.138443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.138467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.138624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.138649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.138770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.138795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.138918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.138944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.139101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.139127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.139253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.139279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.139439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.139465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.139623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.139648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.139823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.139848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.139981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.140005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.140163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.140188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.140357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.140382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.140558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.140583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.140712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.140736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.140894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.140918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.141037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.141061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.141260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.141286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.141441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.141465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.141614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.141639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.856 [2024-07-25 07:32:26.141821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.141847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 
00:26:53.856 [2024-07-25 07:32:26.141973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.856 [2024-07-25 07:32:26.141997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.856 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.142195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.142223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.142387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.142412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.142592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.142617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.142783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.142807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.857 [2024-07-25 07:32:26.142961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.142985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.143159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.143188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.143343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.143369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.143504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.143529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.143648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.143673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.857 [2024-07-25 07:32:26.143823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.143848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.143996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.144021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.144169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.144193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.144343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.144368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.144532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.144558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.857 [2024-07-25 07:32:26.144741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.144765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.144913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.144937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.145088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.145122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.145295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.145320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.145474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.145499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.857 [2024-07-25 07:32:26.145649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.145674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.145794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.145819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.145953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.145976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.146157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.146182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.146306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.146332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.857 [2024-07-25 07:32:26.146481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.146505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.146656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.146681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.146826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.146851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.147005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.147029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 00:26:53.857 [2024-07-25 07:32:26.147189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.147215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.857 [2024-07-25 07:32:26.147347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.857 [2024-07-25 07:32:26.147373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.857 qpair failed and we were unable to recover it. 
00:26:53.860 [2024-07-25 07:32:26.167729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.167755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 00:26:53.860 [2024-07-25 07:32:26.167878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.167903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 00:26:53.860 [2024-07-25 07:32:26.168084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.168109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 00:26:53.860 [2024-07-25 07:32:26.168264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.168290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 00:26:53.860 [2024-07-25 07:32:26.168417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.168442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 
00:26:53.860 [2024-07-25 07:32:26.168586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.168610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 00:26:53.860 [2024-07-25 07:32:26.168731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.168755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.860 qpair failed and we were unable to recover it. 00:26:53.860 [2024-07-25 07:32:26.168908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.860 [2024-07-25 07:32:26.168932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.169090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.169115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.169268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.169293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.169447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.169471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.169602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.169627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.169783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.169807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.169959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.169984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.170135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.170159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.170288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.170315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.170464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.170488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.170636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.170661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.170836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.170861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.171015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.171040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.171197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.171221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.171405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.171434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.171592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.171617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.171769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.171794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.171969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.171993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.172148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.172173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.172329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.172355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.172510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.172534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.172713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.172738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.172892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.172916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.173065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.173089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.173268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.173293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.173470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.173495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.173615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.173639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.173818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.173842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.173975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.174000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.174148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.174173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.174357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.174382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.174538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.174563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.174722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.174747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 
00:26:53.861 [2024-07-25 07:32:26.174876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.861 [2024-07-25 07:32:26.174901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.861 qpair failed and we were unable to recover it. 00:26:53.861 [2024-07-25 07:32:26.175054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.175079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.175257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.175283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.175440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.175466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.175595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.175620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.175808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.175834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.175991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.176016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.176138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.176163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.176338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.176364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.176518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.176544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.176697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.176722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.176878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.176902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.177082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.177108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.177238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.177270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.177441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.177465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.177623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.177648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.177802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.177827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.178009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.178034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.178166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.178190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.178353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.178379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.178529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.178553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.178711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.178735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.178917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.178942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.179091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.179116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.179264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.179289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.179435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.179459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.179583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.179607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.179734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.179759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.179880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.179905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.180058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.180082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.180240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.180272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.180429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.180454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.180602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.180626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.180786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.180811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 00:26:53.862 [2024-07-25 07:32:26.180986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.181011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
00:26:53.862 [2024-07-25 07:32:26.181146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.862 [2024-07-25 07:32:26.181171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.862 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / qpair failure sequence repeats over 100 more times between 07:32:26.181332 and 07:32:26.201713, all for tqpair=0x1c29250 with addr=10.0.0.2, port=4420 ...]
00:26:53.866 [2024-07-25 07:32:26.201866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.201892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.202018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.202174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.202331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.202479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.202636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.202812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.202962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.202986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.203193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.203219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.203374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.203399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.203550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.203576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.203707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.203733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.203915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.203941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.204092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.204117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.204267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.204293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.204453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.204478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.204605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.204630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.204766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.204791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.204941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.204966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.205081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.205106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.205254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.205279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.205405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.205434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.205587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.205612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.205771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.205795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.205946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.205971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.206090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.206114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.206296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.206320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.206460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.206486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.206611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.206635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.206765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.206790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.206917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.206942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.207090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.207119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.207271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.207296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.207460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.207485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 00:26:53.866 [2024-07-25 07:32:26.207618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.866 [2024-07-25 07:32:26.207641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.866 qpair failed and we were unable to recover it. 
00:26:53.866 [2024-07-25 07:32:26.207770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.207794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.207974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.207999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.208153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.208177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.208306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.208331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.208466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.208492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.208648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.208672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.208842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.208866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.208991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.209139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.209297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.209473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.209621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.209770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.209921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.209949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.210073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.210098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.210274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.210299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.210484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.210509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.210667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.210692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.210823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.210849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.211027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.211052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.211191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.211215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.211375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.211400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.211527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.211552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.211702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.211727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.211852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.211878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.212060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.212085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.212238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.212270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.212428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.212453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.212617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.212643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.212766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.212791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.212946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.212970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.213102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.213126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.213257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.213284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.213433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.213457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.213613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.213638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 
00:26:53.867 [2024-07-25 07:32:26.213798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.213824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.213958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.867 [2024-07-25 07:32:26.213982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.867 qpair failed and we were unable to recover it. 00:26:53.867 [2024-07-25 07:32:26.214110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.214136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 00:26:53.868 [2024-07-25 07:32:26.214292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.214318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 00:26:53.868 [2024-07-25 07:32:26.214441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.214465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 
00:26:53.868 [2024-07-25 07:32:26.214601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.214626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 00:26:53.868 [2024-07-25 07:32:26.214814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.214840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 00:26:53.868 [2024-07-25 07:32:26.214988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.215013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 00:26:53.868 [2024-07-25 07:32:26.215148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.215173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 00:26:53.868 [2024-07-25 07:32:26.215316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.868 [2024-07-25 07:32:26.215341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.868 qpair failed and we were unable to recover it. 
00:26:53.871 [2024-07-25 07:32:26.234579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.234603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.234760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.234785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.234958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.234984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.235104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.235128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.235313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.235339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 
00:26:53.871 [2024-07-25 07:32:26.235497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.235522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.235675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.235699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.235822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.235848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.236017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.236043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.236173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.236198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 
00:26:53.871 [2024-07-25 07:32:26.236333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.236358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.236488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.236513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.236647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.236671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.236805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.236829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.236979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.237004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 
00:26:53.871 [2024-07-25 07:32:26.237164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.237188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.237345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.237370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.237528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.237553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.237729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.237753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.237913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.237938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 
00:26:53.871 [2024-07-25 07:32:26.238068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.238094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.238214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.238248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.238389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.238414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.238588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.238616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.238804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.238831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 
00:26:53.871 [2024-07-25 07:32:26.239038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.239064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.239260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.239288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.239427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.239451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.239579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.239603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.871 qpair failed and we were unable to recover it. 00:26:53.871 [2024-07-25 07:32:26.239759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.871 [2024-07-25 07:32:26.239784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.239936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.239961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.240138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.240162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.240318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.240343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.240470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.240495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.240672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.240697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.240868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.240892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.241017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.241043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.241194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.241219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.241404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.241430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.241580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.241608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.241829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.241856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.242017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.242046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.242206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.242233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.242445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.242472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.242728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.242777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.242969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.242995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.243156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.243182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.243376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.243401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.243553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.243581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.243712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.243736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.243883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.243907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.244030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.244071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.244207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.244231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.244411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.244435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.244566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.244590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.244725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.244749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.244877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.244902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.245073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.245102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.245270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.245296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.245456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.245481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.245612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.245638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.245780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.245805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.245969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.245994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.246168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.246193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.246353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.246379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 
00:26:53.872 [2024-07-25 07:32:26.246505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.246529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.872 qpair failed and we were unable to recover it. 00:26:53.872 [2024-07-25 07:32:26.246663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.872 [2024-07-25 07:32:26.246687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.246843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.246867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.247021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.247046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.247199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.247224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 
00:26:53.873 [2024-07-25 07:32:26.247388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.247414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.247544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.247568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.247739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.247764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.247895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.247920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 00:26:53.873 [2024-07-25 07:32:26.248054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.873 [2024-07-25 07:32:26.248079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.873 qpair failed and we were unable to recover it. 
00:26:53.873 [2024-07-25 07:32:26.248198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.873 [2024-07-25 07:32:26.248229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.873 qpair failed and we were unable to recover it.
00:26:53.873 [2024-07-25 07:32:26.248399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.873 [2024-07-25 07:32:26.248425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.873 qpair failed and we were unable to recover it.
00:26:53.875 [identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x1c29250 (addr=10.0.0.2, port=4420) repeat continuously from 2024-07-25 07:32:26.248581 through 07:32:26.267890]
00:26:53.876 [2024-07-25 07:32:26.268072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.268096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.268276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.268305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.268449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.268472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.268657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.268681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.268832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.268856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 
00:26:53.876 [2024-07-25 07:32:26.269011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.269036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.269187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.269211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.269388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.269414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.269569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.269593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.269713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.269737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 
00:26:53.876 [2024-07-25 07:32:26.269890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.269914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.270061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.270085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.270262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.270292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.270443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.270469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.270619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.270644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 
00:26:53.876 [2024-07-25 07:32:26.270794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.270819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.270999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.271025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.271198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.271222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.271387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.271411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.876 [2024-07-25 07:32:26.271543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.271569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 
00:26:53.876 [2024-07-25 07:32:26.271727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.876 [2024-07-25 07:32:26.271751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.876 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.271906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.271932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.272114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.272140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.272292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.272319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.272467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.272491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.272659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.272684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.272834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.272859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.272983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.273007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.273144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.273169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.273349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.273375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.273501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.273526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.273681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.273706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.273854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.273879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.274054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.274078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.274209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.274234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.274408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.274433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.274550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.274574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.274704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.274729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.274873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.274897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.275074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.275099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.275222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.275254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.275404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.275428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.275573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.275597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.275751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.275776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.275908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.275932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.276109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.276133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.276287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.276313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.276462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.276487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.276602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.276630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.276811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.276836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.276984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.277009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.277171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.277195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.277314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.277340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.277489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.277513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.277642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.277667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 
00:26:53.877 [2024-07-25 07:32:26.277802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.277827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.277978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.278003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.278185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.278210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.877 qpair failed and we were unable to recover it. 00:26:53.877 [2024-07-25 07:32:26.278374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.877 [2024-07-25 07:32:26.278399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.278584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.278609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 
00:26:53.878 [2024-07-25 07:32:26.278762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.278787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.278943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.278968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.279095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.279119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.279272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.279298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.279450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.279475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 
00:26:53.878 [2024-07-25 07:32:26.279602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.279626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.279776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.279801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.279982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.280007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.280157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.280182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.280314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.280340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 
00:26:53.878 [2024-07-25 07:32:26.280464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.280489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.280642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.280668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.280801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.280826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.280979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.281004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.281125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.281150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 
00:26:53.878 [2024-07-25 07:32:26.281271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.281301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.281454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.281479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.281630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.281655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.281804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.281829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 00:26:53.878 [2024-07-25 07:32:26.281979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.878 [2024-07-25 07:32:26.282004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.878 qpair failed and we were unable to recover it. 
00:26:53.881 [2024-07-25 07:32:26.301707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.301732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.301919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.301944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.302094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.302118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.302267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.302293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.302416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.302440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 
00:26:53.881 [2024-07-25 07:32:26.302592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.302616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.302776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.302801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.302977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.303006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.303159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.303183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.303342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.303368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 
00:26:53.881 [2024-07-25 07:32:26.303515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.303540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.303686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.303710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.303868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.303893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.304047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.304074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.304226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.304259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 
00:26:53.881 [2024-07-25 07:32:26.304433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.304458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.304585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.304610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.304788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.304812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.304974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.304999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.305152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.305176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 
00:26:53.881 [2024-07-25 07:32:26.305324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.305350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.305532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.305557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.305684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.881 [2024-07-25 07:32:26.305707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.881 qpair failed and we were unable to recover it. 00:26:53.881 [2024-07-25 07:32:26.305833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.305858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.305984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.306009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.306186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.306211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.306347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.306372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.306550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.306575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.306726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.306751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.306904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.306928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.307087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.307112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.307288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.307313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.307444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.307469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.307621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.307646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.307797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.307821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.307974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.307999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.308178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.308203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.308370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.308394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.308554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.308578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.308727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.308752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.308871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.308896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.309045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.309073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.309251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.309277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.309451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.309475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.309627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.309651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.309808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.309833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.309960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.309984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.310100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.310124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.310307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.310333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.310490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.310515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.310634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.310659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.310791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.310816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.310993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.311017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.311167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.311191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.311310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.311335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.311482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.311507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.311690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.311717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.311870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.311897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.312065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.312092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.312259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.312302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.312453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.312479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.312638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.312662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.312795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.312820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.312970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.312996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 00:26:53.882 [2024-07-25 07:32:26.313173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.882 [2024-07-25 07:32:26.313198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.882 qpair failed and we were unable to recover it. 
00:26:53.882 [2024-07-25 07:32:26.313353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.313379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.313536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.313562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.313711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.313735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.313885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.313908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.314033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.314058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 
00:26:53.883 [2024-07-25 07:32:26.314174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.314199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.314338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.314364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.314493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.314517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.314654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.314679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.314857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.314881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 
00:26:53.883 [2024-07-25 07:32:26.315014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.315043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.315192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.315216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.315372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.315398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.315580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.315605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.315765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.315789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 
00:26:53.883 [2024-07-25 07:32:26.315966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.315989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.316119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.316145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.316299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.316325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.316476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.316500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 00:26:53.883 [2024-07-25 07:32:26.316681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.883 [2024-07-25 07:32:26.316707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:53.883 qpair failed and we were unable to recover it. 
00:26:53.883 [2024-07-25 07:32:26.316832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.316857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.317013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.317037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.317186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.317212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.317396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.317422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.317600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.317624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.317772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.317797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.317943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.317968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.318112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.318136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.318315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.318341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.318462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.318487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.318628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.318654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.318817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.318845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.319063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.319106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.319289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.319317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.319501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.319528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.319645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.319671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.319793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.319818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.319949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.319981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.320146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.320173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.320307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.320333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.320516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.320541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.320687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.320712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.320839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.320864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.883 [2024-07-25 07:32:26.321000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.883 [2024-07-25 07:32:26.321026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.883 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.321156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.321184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.321347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.321373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.321530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.321556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.321673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.321699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.321839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.321864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.321993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.322019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.322169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.322194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.322330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.322356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.322534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.322559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.322712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.322737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.322862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.322888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.323071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.323097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.323252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.323278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.323395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.323421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.323582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.323607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.323787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.323812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.323957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.323983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.324141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.324167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.324326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.324352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.324509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.324535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.324676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.324702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.324834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.324860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.324986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.325011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.325200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.325225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.325409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.325437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.325623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.325651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.884 [2024-07-25 07:32:26.325891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.884 [2024-07-25 07:32:26.325920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.884 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.326135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.326163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.326335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.326361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.326522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.326547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.326699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.326724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.326882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.326907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.327066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.327092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.327216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.327251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.327408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.327433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.327560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.327586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.327734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.327759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.327912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.327937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.328092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.328122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.328295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.328321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.328490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.328518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.328744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.328772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.328946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.328971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.329120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.329145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.329305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.329331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.329456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.329481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.329613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.329638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.329791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.329817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.329950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.329974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.330129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.330154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.330275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.330301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.330455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.330480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.330640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.330665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.330797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.330823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.330998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.331023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.331154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.331179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.331333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.331359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.331509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.331535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.331657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.331684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.331807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.331832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.332026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.332065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.332227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.332268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.332429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.332458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.885 [2024-07-25 07:32:26.332583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.885 [2024-07-25 07:32:26.332607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.885 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.332764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.332789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.332938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.332963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.333129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.333156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.333307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.333333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.333490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.333515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.333696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.333721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.333843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.886 [2024-07-25 07:32:26.333868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:53.886 qpair failed and we were unable to recover it.
00:26:53.886 [2024-07-25 07:32:26.334018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.334044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.334191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.334216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.334353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.334383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.334512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.334537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.334688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.334713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 
00:26:53.886 [2024-07-25 07:32:26.334865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.334890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.335011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.335037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.335189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.335214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.335374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.335402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.335559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.335588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 
00:26:53.886 [2024-07-25 07:32:26.335784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.335813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.336023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.336051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.336256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.336286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.336479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.336504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.336635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.336660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 
00:26:53.886 [2024-07-25 07:32:26.336814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.336840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.336968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.336993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.337147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.337172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.337303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.337329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.337455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.337481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 
00:26:53.886 [2024-07-25 07:32:26.337662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.337688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.337849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.337874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.337993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.338147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.338330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 
00:26:53.886 [2024-07-25 07:32:26.338478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.338632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.338792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.338974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.338999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.886 qpair failed and we were unable to recover it. 00:26:53.886 [2024-07-25 07:32:26.339152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.886 [2024-07-25 07:32:26.339181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.887 qpair failed and we were unable to recover it. 
00:26:53.887 [2024-07-25 07:32:26.339336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.887 [2024-07-25 07:32:26.339362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.887 qpair failed and we were unable to recover it. 00:26:53.887 [2024-07-25 07:32:26.339533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.887 [2024-07-25 07:32:26.339558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.887 qpair failed and we were unable to recover it. 00:26:53.887 [2024-07-25 07:32:26.339736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.887 [2024-07-25 07:32:26.339761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.887 qpair failed and we were unable to recover it. 00:26:53.887 [2024-07-25 07:32:26.339911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.887 [2024-07-25 07:32:26.339937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:53.887 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.340109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.340137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 
00:26:54.174 [2024-07-25 07:32:26.340305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.340331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.340542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.340567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.340722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.340747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.340876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.340902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.341032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.341057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 
00:26:54.174 [2024-07-25 07:32:26.341195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.341233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.341397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.341437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.341612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.341640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.341810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.341839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.341960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.341987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 
00:26:54.174 [2024-07-25 07:32:26.342123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.342151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.342303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.342330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.342451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.342477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.342708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.342735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.342856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.342884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 
00:26:54.174 [2024-07-25 07:32:26.343009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.343035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.174 [2024-07-25 07:32:26.343178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.174 [2024-07-25 07:32:26.343208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.174 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.343370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.343396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.343526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.343551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.343676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.343702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.343822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.343847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.344007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.344033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.344188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.344213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.344346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.344372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.344495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.344521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.344667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.344692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.344836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.344865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.345057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.345087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.345275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.345302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.345455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.345482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.345628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.345654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.345814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.345840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.345995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.346021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.346203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.346228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.346383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.346415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.346571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.346596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.346717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.346742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.346893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.346918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.347046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.347075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.347262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.347289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.347448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.347474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.347632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.347659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.347785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.347812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.347975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.348001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.348155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.348181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.348309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.348344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.348473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.348499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.348680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.348705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.348860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.348885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.349064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.349089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 
00:26:54.175 [2024-07-25 07:32:26.349213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.349239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.349426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.349453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.349587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.349628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.175 qpair failed and we were unable to recover it. 00:26:54.175 [2024-07-25 07:32:26.349821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.175 [2024-07-25 07:32:26.349846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.176 qpair failed and we were unable to recover it. 00:26:54.176 [2024-07-25 07:32:26.349993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.176 [2024-07-25 07:32:26.350018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.176 qpair failed and we were unable to recover it. 
00:26:54.176 [... the same three-line failure sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from 2024-07-25 07:32:26.350164 through 07:32:26.371793 (Jenkins timestamps 00:26:54.176 through 00:26:54.179) ...]
00:26:54.179 [2024-07-25 07:32:26.371961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.371990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.372158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.372183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.372324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.372353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.372528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.372556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.372715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.372743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-07-25 07:32:26.372916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.372944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.373114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.373139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.373273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.373299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.373447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.373472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.373624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.373650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-07-25 07:32:26.373801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.373826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.373959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.373984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.374138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.374163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.374320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.374346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.374503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.374529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-07-25 07:32:26.374655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.374680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.374831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.374859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.375030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.375058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.375201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.375227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.375361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.375386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-07-25 07:32:26.375522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.375547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.375726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.375754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.375904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.375929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.376059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.376086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.376247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.376274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-07-25 07:32:26.376399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.376425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.376574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.376599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.376731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.376757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-07-25 07:32:26.376907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-07-25 07:32:26.376936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.377091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.377120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.377253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.377291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.377415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.377440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.377561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.377586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.377715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.377741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.377906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.377931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.378087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.378112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.378275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.378300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.378455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.378481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.378634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.378659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.378781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.378806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.378928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.378954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.379078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.379103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.379258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.379285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.379417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.379442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.379594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.379619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.379795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.379824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.380032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.380060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.380226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.380257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.380425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.380450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.380577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.380602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.380749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.380774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.380896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.380921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.381099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.381124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.381258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.381295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.381436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.381461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.381611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.381637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.381767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.381792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.381944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.381969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.382118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.382143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.382293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.382319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-07-25 07:32:26.382456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.382482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.382664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.382689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.382815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.382841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.382970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-07-25 07:32:26.382995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-07-25 07:32:26.383148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.383175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.383303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.383329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.383465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.383490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.383618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.383643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.383772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.383798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.383951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.383983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.384142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.384168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.384320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.384346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.384473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.384498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.384675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.384700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.384827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.384852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.384985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.385010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.385133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.385158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.385334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.385360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.385512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.385538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.385691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.385716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.385886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.385914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.386071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.386099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.386236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.386268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.386402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.386427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.386559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.386585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.386702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.386727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.386849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.386875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.387027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.387053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.387182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.387207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.387354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.387379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.387513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.387539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.387698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.387723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.387878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.387904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.388062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.388088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.388222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.388254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-07-25 07:32:26.388454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.388480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.388638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.388664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.388815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.388840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.388993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.389018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-07-25 07:32:26.389214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-07-25 07:32:26.389240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.389415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.389441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.389596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.389622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.389750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.389775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.389900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.389927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.390061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.390087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.390209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.390236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.390401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.390426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.390571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.390596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.390728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.390754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.390914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.390944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.391061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.391086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.391212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.391237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.391377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.391404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.391559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.391585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.391712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.391738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.391870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.391895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.392042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.392067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.392232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.392265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.392422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.392447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.392609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.392634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.392819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.392844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.392995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.393019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.393185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.393210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.393400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.393426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.393578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.393603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.393753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.393778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.393961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.393987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.394114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.394141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.394316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.394342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.394501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.394527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.394679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.394705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.394880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.394906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.395039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.395064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.395214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.395240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.395375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.395402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-07-25 07:32:26.395555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-07-25 07:32:26.395581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-07-25 07:32:26.395712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.395738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.395892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.395917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.396045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.396071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.396201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.396226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.396367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.396393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.396519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.396544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.396664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.396690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.396867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.396892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.397044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.397070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.397203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.397229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.397362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.397387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.397540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.397568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.397755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.397782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.397947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.397980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.398176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.398204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.398386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.398430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.398632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.398662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.398940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.399001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.399225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.399261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.399459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.399488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.399717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.399770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.400002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.400053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.400253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.400297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.400429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.400455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.400602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.400628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.400749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.400775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.400931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.400956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.401088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.401113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.401238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.401275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.401410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.401436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.401591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.401616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.401749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.401774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.401902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.401928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.402078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.402103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-07-25 07:32:26.402265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.402292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-07-25 07:32:26.402452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-07-25 07:32:26.402478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.402608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.402635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.402788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.402814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.402942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.402968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-07-25 07:32:26.403087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.403113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.403271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.403297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.403478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.403504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.403652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.403677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-07-25 07:32:26.403806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-07-25 07:32:26.403831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-07-25 07:32:26.403981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.404130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.404288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.404444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.404592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.404740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.404921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.404946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.405105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.405130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.405270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.405296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.405452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.405481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.405628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.405653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.405773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.405798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.405934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.405959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.406117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.406142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.406325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.406351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.406483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.406508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.406634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.406659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.406811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.406838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.406996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.407021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.407142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.407167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.407291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.407317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.407450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.407475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.407625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.407650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.407800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.407825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.184 [2024-07-25 07:32:26.407981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.184 [2024-07-25 07:32:26.408006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.184 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.408175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.408200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.408361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.408388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.408519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.408544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.408680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.408706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.408857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.408882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.409034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.409059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.409262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.409290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.409432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.409457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.409585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.409611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.409766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.409792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.409943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.409968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.410148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.410173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.410364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.410390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.410537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.410562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.410721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.410746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.410872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.410899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.411015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.411040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.411187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.411213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.411338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.411364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.411496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.411522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.411675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.411701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.411854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.411879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.412005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.412030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.412185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.412211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.412340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.412369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.412520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.412545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.412697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.412723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.412848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.412873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.413051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.413076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.413271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.413313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.413468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.413493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.413644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.413669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.413845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.413870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.414015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.414040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.414198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.414224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.185 qpair failed and we were unable to recover it.
00:26:54.185 [2024-07-25 07:32:26.414396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.185 [2024-07-25 07:32:26.414422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.414574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.414601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.414754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.414780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.414918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.414943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.415092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.415117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.415247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.415273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.415429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.415455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.415633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.415658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.415800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.415825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.415951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.415976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.416101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.416127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.416279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.416305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.416464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.416490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.416645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.416670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.416818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.416844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.417016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.417041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.417175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.417200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.417341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.417367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.417554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.417580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.417705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.417730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.417868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.417894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.418044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.418194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.418350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.418496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.418668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.418846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.418991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.419016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.419132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.419157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.419289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.419321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.419472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.419497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.419642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.419667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.419825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.419849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.420030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.420056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.420205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-07-25 07:32:26.420230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
00:26:54.186 [2024-07-25 07:32:26.420359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-07-25 07:32:26.420384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-07-25 07:32:26.420566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-07-25 07:32:26.420591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-07-25 07:32:26.420713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.420739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.420896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.420921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.421068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.421094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.421210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.421235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.421437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.421463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.421616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.421641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.421799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.421824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.421949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.421975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.422155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.422184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.422386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.422412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.422566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.422591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.422747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.422772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.422926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.422951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.423076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.423101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.423279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.423305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.423458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.423484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.423638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.423663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.423815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.423841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.423985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.424010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.424135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.424160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.424321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.424347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.424475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.424500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.424620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.424646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.424826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.424851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.425007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.425032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.425176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.425201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.425359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.425386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.425513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.425539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.425687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.425712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.425866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.425891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.426056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.426084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.426277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.426304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.426480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.426505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 
00:26:54.187 [2024-07-25 07:32:26.426638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.426663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.426839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.426864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.427020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.187 [2024-07-25 07:32:26.427047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.187 qpair failed and we were unable to recover it. 00:26:54.187 [2024-07-25 07:32:26.427192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.427217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.427338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.427364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.427521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.427547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.427724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.427750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.427900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.427926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.428102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.428127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.428275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.428301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.428457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.428482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.428663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.428689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.428867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.428892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.429074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.429099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.429221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.429258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.429420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.429446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.429601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.429628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.429760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.429786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.429943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.429968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.430113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.430139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.430289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.430315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.430470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.430496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.430676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.430701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.430888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.430913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.431044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.431069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.431183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.431208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.431392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.431422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.431578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.431604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.431781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.431806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.431961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.431987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.432115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.432140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.432291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.432317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.432452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.432478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.432657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.432682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 00:26:54.188 [2024-07-25 07:32:26.432802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.432827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.188 qpair failed and we were unable to recover it. 
00:26:54.188 [2024-07-25 07:32:26.432988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.188 [2024-07-25 07:32:26.433013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.433143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.433169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.433323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.433349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.433509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.433535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.433686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.433711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-25 07:32:26.433845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.433870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.434025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.434051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.434204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.434229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.434417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.434442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.434569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.434595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-25 07:32:26.434748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.434774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.434927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.434952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.435079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.435104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.435260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.435286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.435407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.435432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.189 [2024-07-25 07:32:26.435586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.435612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.435733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.435758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.435882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.435907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.436042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.436068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 00:26:54.189 [2024-07-25 07:32:26.436220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.189 [2024-07-25 07:32:26.436251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.189 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-25 07:32:26.455610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.455635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.455758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.455783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.455910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.455936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.456081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.456106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.456260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.456286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-25 07:32:26.456442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.456468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.456620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.456645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.456767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.456793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.456926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.456952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.457106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.457132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-25 07:32:26.457261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.457287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.457445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.457471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.457627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.457653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.457815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.457840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.457999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.458024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-07-25 07:32:26.458176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.458201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.458361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.458387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.458547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-07-25 07:32:26.458573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-07-25 07:32:26.458726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.458751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.458929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.458955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-07-25 07:32:26.459114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.459139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.459262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.459292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.459416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.459442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.459591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.459616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.459739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.459763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-07-25 07:32:26.459912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.459938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.460079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.460104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.460231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.460263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.460441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.460467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.460597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.460623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-07-25 07:32:26.460757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.460782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.460915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.460940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.461063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.461088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.461217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.461248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.461377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.461403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-07-25 07:32:26.461567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.461593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.461771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.461796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.461917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.461942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.462094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.462122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.462310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.462336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-07-25 07:32:26.462461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.462486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.462647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.462672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.462825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.462852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.462999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.463024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.463145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.463171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-07-25 07:32:26.463305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.463331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.463462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.463488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.463645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.463670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-07-25 07:32:26.463823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-07-25 07:32:26.463847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.463995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.464021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 
00:26:54.194 [2024-07-25 07:32:26.464185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.464210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.464366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.464391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.464524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.464549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.464706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.464731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.464859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.464884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 
00:26:54.194 [2024-07-25 07:32:26.465011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.465037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.465165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.465191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.465374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.465400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.465550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.465575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.465740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.465768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 
00:26:54.194 [2024-07-25 07:32:26.465937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.465965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.466122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.466156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.466360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.466390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.466584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.466611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.466794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.466822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 
00:26:54.194 [2024-07-25 07:32:26.467020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.467050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.467214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.467251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.467469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.467497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.467833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.467894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.468082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.468112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 
00:26:54.194 [2024-07-25 07:32:26.468318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.468347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.468521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.468547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.468679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.468706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.468838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.468864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 00:26:54.194 [2024-07-25 07:32:26.468998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.194 [2024-07-25 07:32:26.469023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.194 qpair failed and we were unable to recover it. 
00:26:54.194 [2024-07-25 07:32:26.469179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.469205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.469336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.469362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.469493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.469519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.469677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.469703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.469853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.469878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.470010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.470036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.470171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.470196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.470375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.470401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.470583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.470608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.470758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.194 [2024-07-25 07:32:26.470784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.194 qpair failed and we were unable to recover it.
00:26:54.194 [2024-07-25 07:32:26.470906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.470931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.471107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.471135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.471280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.471305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.471462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.471488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.471658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.471683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.471827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.471852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.471979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.472004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.472156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.472182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.472364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.472390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.472541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.472566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.472747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.472772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.472919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.472944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.473093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.473118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.473279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.473305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.473483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.473508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.473658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.473684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.473844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.473874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.474030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.474056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.474187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.474212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.474401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.474427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.474605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.474630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.474779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.474804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.474962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.474988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.475133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.475159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.475318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.475347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.475659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.475709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.475882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.475908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.476080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.476105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.476285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.476312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.476466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.476491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.476669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.476695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.476823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.476849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.477008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.477033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.477185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.477210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.477341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.477366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.477543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.477568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.477742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.477768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.195 qpair failed and we were unable to recover it.
00:26:54.195 [2024-07-25 07:32:26.477922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.195 [2024-07-25 07:32:26.477947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.478075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.478101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.478251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.478277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.478435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.478460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.478587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.478612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.478730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.478755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.478920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.478945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.479101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.479127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.479280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.479306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.479455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.479480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.479628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.479653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.479773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.479797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.479924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.479949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.480070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.480095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.480251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.480277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.480431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.480456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.480622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.480648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.480800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.480825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.480980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.481005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.481134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.481165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.481343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.481369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.481527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.481552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.481674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.481699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.481878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.481903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.482027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.482052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.482206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.482231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.482394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.482420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.482555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.482580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.482726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.482751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.482905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.482932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.483108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.483136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.483302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.483328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.483480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.483506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.483662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.483688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.483842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.483869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.484015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.484039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.484171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.196 [2024-07-25 07:32:26.484196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.196 qpair failed and we were unable to recover it.
00:26:54.196 [2024-07-25 07:32:26.484356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.484381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.484511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.484536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.484692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.484717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.484897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.484922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.485041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.485066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.485231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.485263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.485378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.485403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.485519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.485544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.485701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.485726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.485883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.485908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.486083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.486111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.486310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.486336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.486469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.486495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.486646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.486671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.486826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.486851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.486995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.487020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.487172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.487198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.487360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.487386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.487508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.487534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.487715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.487740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.487898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.487925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.488106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.488131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.488252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.488281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.488450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.488475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.488605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.488630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.488782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.488808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.488957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.488983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.489137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.489162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.489309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.489335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.489492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.489518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.489640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.197 [2024-07-25 07:32:26.489667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.197 qpair failed and we were unable to recover it.
00:26:54.197 [2024-07-25 07:32:26.489795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.489821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-25 07:32:26.489957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.489984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-25 07:32:26.490165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.490190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-25 07:32:26.490341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.490367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-25 07:32:26.490525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.490552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 
00:26:54.197 [2024-07-25 07:32:26.490683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.490709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.197 [2024-07-25 07:32:26.490860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-25 07:32:26.490885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.197 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.491068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.491093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.491251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.491277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.491459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.491485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-25 07:32:26.491638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.491664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.491782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.491806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.491985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.492010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.492167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.492193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.492354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.492380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-25 07:32:26.492490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.492515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.492645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.492670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.492829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.492854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.492979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.493004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.493159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.493185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-25 07:32:26.493319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.493346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.493505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.493530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.493772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.493824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.494018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.494045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.494206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.494235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-25 07:32:26.494415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.494441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.494619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.494644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.494799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.494824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.494981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.495006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.495154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.495179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-25 07:32:26.495334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.495360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.495512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.495541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.495663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.495689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.495848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.495872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.495997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.496024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 
00:26:54.198 [2024-07-25 07:32:26.496177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.496203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.496364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.496390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.496523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.496549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.496666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.198 [2024-07-25 07:32:26.496692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.198 qpair failed and we were unable to recover it. 00:26:54.198 [2024-07-25 07:32:26.496823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.496849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.497007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.497033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.497192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.497216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.497410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.497436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.497563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.497589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.497759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.497787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.497948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.497977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.498171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.498199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.498427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.498456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.498639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.498666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.498884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.498913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.499096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.499124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.499314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.499342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.499573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.499624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.499800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.499825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.499978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.500003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.500160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.500185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.500365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.500391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.500542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.500567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.500722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.500748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.500903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.500928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.501075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.501100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.501254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.501281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.501434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.501459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.501606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.501631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.501807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.501833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.501950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.501975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.502101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.502126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.502283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.502308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.502460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.502485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.502641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.502666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.502822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.502849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.503000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.503031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.503211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.503237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.503370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.503396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 00:26:54.199 [2024-07-25 07:32:26.503544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.199 [2024-07-25 07:32:26.503569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.199 qpair failed and we were unable to recover it. 
00:26:54.199 [2024-07-25 07:32:26.503721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.199 [2024-07-25 07:32:26.503747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.200 qpair failed and we were unable to recover it.
00:26:54.200 [... identical posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error pair repeated from 07:32:26.503928 through 07:32:26.523954 (errno = 111, tqpair=0x7f3d3c000b90, addr=10.0.0.2, port=4420); every attempt ended with "qpair failed and we were unable to recover it." ...]
00:26:54.203 [2024-07-25 07:32:26.524139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.524164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.524408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.524437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.524658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.524686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.524854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.524882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.525041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.525070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 
00:26:54.203 [2024-07-25 07:32:26.525264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.525306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.525463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.525489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.525617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.525643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.525770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.525796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.525979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.526005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 
00:26:54.203 [2024-07-25 07:32:26.526167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.526191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.526340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.526366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.526526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.526552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.526700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.526725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.526840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.526866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 
00:26:54.203 [2024-07-25 07:32:26.527031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.527056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.527206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.527231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.527394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.527420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.527597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.527622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.527801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.527826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 
00:26:54.203 [2024-07-25 07:32:26.527982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.528007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.528170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.528195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.528348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.528374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.528525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.528550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.528708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.528733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 
00:26:54.203 [2024-07-25 07:32:26.528878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.528903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.529038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.529066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.529265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.529291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.529466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.203 [2024-07-25 07:32:26.529495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.203 qpair failed and we were unable to recover it. 00:26:54.203 [2024-07-25 07:32:26.529646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.529672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.529851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.529877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.530005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.530030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.530186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.530211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.530395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.530422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.530552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.530578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.530700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.530726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.530906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.530931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.531053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.531078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.531199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.531225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.531361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.531387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.531541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.531567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.531690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.531715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.531850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.531875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.532068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.532096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.532234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.532268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.532451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.532476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.532602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.532627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.532779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.532805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.532959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.532985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.533143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.533169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.533320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.533346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.533502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.533527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.533681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.533706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.533859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.533885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.534020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.534046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.534201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.534227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.534420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.534445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.534594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.534619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.534772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.534797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.534940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.534965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.535143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.535168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.535316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.535342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.535472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.535497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.535677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.535702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.535855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.535880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 
00:26:54.204 [2024-07-25 07:32:26.536031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.536056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.204 qpair failed and we were unable to recover it. 00:26:54.204 [2024-07-25 07:32:26.536210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.204 [2024-07-25 07:32:26.536236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.536389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.536415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.536570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.536599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.536750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.536775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 07:32:26.536916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.536941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.537063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.537089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.537252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.537279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.537396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.537422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 00:26:54.205 [2024-07-25 07:32:26.537598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.537622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it. 
00:26:54.205 [2024-07-25 07:32:26.537777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.205 [2024-07-25 07:32:26.537802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.205 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure block repeats continuously from 07:32:26.537926 through 07:32:26.557812 — same tqpair=0x7f3d3c000b90, addr=10.0.0.2, port=4420, errno = 111 on every attempt]
00:26:54.208 [2024-07-25 07:32:26.557956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.557982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.558139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.558164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.558347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.558372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.558549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.558574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.558705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.558731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 
00:26:54.208 [2024-07-25 07:32:26.558882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.558907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.559060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.559086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.559247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.559290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.559492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.559518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.559668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.559694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 
00:26:54.208 [2024-07-25 07:32:26.559826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.559851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.560003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.560030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.560216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.560247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.560378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.560404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.560588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.560613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 
00:26:54.208 [2024-07-25 07:32:26.560745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.560772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.560893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.560919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.561075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.208 [2024-07-25 07:32:26.561101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.208 qpair failed and we were unable to recover it. 00:26:54.208 [2024-07-25 07:32:26.561268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.561294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.561449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.561476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.561660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.561685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.561827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.561853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.561980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.562005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.562171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.562197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.562355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.562382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.562507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.562532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.562688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.562714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.562843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.562868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.563017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.563042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.563194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.563219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.563402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.563428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.563579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.563604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.563786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.563812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.563933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.563959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.564129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.564157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.564307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.564333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.564490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.564515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.564672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.564696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.564848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.564877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.565006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.565031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.565177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.565202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.565362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.565389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.565536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.565561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.565685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.565710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.565846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.565871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.565995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.566020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.566148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.566174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.566335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.566361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.566561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.566589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.566781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.566810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.567031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.567060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.567252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.567296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.567434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.567460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.567617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.567643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.567809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.567834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 
00:26:54.209 [2024-07-25 07:32:26.567962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.209 [2024-07-25 07:32:26.567987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.209 qpair failed and we were unable to recover it. 00:26:54.209 [2024-07-25 07:32:26.568148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.568174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.568294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.568320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.568456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.568481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.568632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.568658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 
00:26:54.210 [2024-07-25 07:32:26.568788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.568814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.568962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.568987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.569141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.569167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.569348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.569374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.569502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.569528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 
00:26:54.210 [2024-07-25 07:32:26.569687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.569713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.569867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.569892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.570061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.570090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.570262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.570288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.570465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.570490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 
00:26:54.210 [2024-07-25 07:32:26.570606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.570631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.570789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.570814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.570964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.570990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.571107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.571133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 00:26:54.210 [2024-07-25 07:32:26.571285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.210 [2024-07-25 07:32:26.571311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.210 qpair failed and we were unable to recover it. 
00:26:54.210 [2024-07-25 07:32:26.571464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.210 [2024-07-25 07:32:26.571489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.210 qpair failed and we were unable to recover it.
[The three-line error pattern above repeats roughly 115 times between 07:32:26.571464 and 07:32:26.591715, every occurrence with errno = 111 against addr=10.0.0.2, port=4420. The failing tqpair is 0x7f3d3c000b90 throughout, except for a short run of identical failures on tqpair=0x7f3d2c000b90 around 07:32:26.585-07:32:26.586, after which the failures return to 0x7f3d3c000b90.]
00:26:54.213 [2024-07-25 07:32:26.591867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.591892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.592075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.592100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.592254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.592280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.592429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.592454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.592608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.592634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 
00:26:54.213 [2024-07-25 07:32:26.592786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.592812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.592954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.592980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.593154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.593180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.593306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.593336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.593489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.593514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 
00:26:54.213 [2024-07-25 07:32:26.593693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.593718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.593837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.593862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.594008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.594033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.594181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.594206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.594398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.594424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 
00:26:54.213 [2024-07-25 07:32:26.594552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.594577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.594727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.594753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.594907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.213 [2024-07-25 07:32:26.594932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.213 qpair failed and we were unable to recover it. 00:26:54.213 [2024-07-25 07:32:26.595056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.595082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.595266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.595295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.595430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.595456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.595634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.595659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.595793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.595818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.595971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.595996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.596146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.596172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.596325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.596351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.596503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.596528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.596707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.596732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.596885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.596910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.597040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.597065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.597218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.597254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.597412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.597438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.597615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.597640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.597817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.597842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.598025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.598051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.598171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.598196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.598410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.598435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.598565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.598590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.598739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.598764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.598922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.598948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.599146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.599174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.599318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.599344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.599520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.599546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.599806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.599855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.600076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.600104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.600277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.600302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.600458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.600484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.600633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.600659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.600780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.600812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.600998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.601024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.601174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.601198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.601359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.601384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.601542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.601567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.601751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.601776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.601929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.601955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.602111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.602137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.602267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.602293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.602449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.602473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.602629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.602655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 00:26:54.214 [2024-07-25 07:32:26.602809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.214 [2024-07-25 07:32:26.602835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.214 qpair failed and we were unable to recover it. 
00:26:54.214 [2024-07-25 07:32:26.602987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.603012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.603169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.603195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.603359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.603385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.603535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.603560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.603744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.603770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.603894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.603919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.604102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.604130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.604302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.604328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.604477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.604502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.604677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.604702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.604859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.604884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.604998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.605024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.605174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.605200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.605396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.605423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.605576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.605601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.605758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.605783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.605916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.605941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.606066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.606091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.606253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.606279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.606437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.606463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.606613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.606638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.606766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.606791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.606942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.606967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.607137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.607162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.607280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.607306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.607488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.607513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.607671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.607696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.607852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.607877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.608058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.608087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.608213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.608238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.608366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.608392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.608547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.608572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.608724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.608749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.608899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.608923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.609051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.609077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.609231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.609268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.609397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.609422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.609584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.609610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.609790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.609815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.609990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.610016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 
00:26:54.215 [2024-07-25 07:32:26.610171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.610198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.610367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.610393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.610549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.610574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.610725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.215 [2024-07-25 07:32:26.610750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.215 qpair failed and we were unable to recover it. 00:26:54.215 [2024-07-25 07:32:26.610901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.610926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.611073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.611097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.611226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.611257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.611384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.611410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.611539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.611564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.611745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.611770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.611946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.611971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.612147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.612172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.612326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.612351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.612507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.612534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.612717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.612742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.612895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.612924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.613067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.613096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.613270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.613296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.613446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.613471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.613642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.613668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.613802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.613827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.614009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.614034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.614154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.614179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.614308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.614333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.614516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.614541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.614666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.614691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.614845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.614870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.615019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.615044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.615219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.615258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.615463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.615489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.615672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.615697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.615850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.615875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.616028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.616053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.616186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.616212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.616374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.616400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.616552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.616579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.616761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.616786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.616940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.616965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.617129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.617157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.617332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.617358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 
00:26:54.216 [2024-07-25 07:32:26.617479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.617504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.216 qpair failed and we were unable to recover it. 00:26:54.216 [2024-07-25 07:32:26.617633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.216 [2024-07-25 07:32:26.617658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.617818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.617845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.618035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.618060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.618212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.618237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 
00:26:54.217 [2024-07-25 07:32:26.618403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.618428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.618550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.618575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.618705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.618730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.618881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.618905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.619051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.619076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 
00:26:54.217 [2024-07-25 07:32:26.619221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.619257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.619428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.619453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.619569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.619594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.619739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.619764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.619890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.619915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 
00:26:54.217 [2024-07-25 07:32:26.620064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.620093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.620247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.620273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.620429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.620455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.620582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.620608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.620751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.620777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 
00:26:54.217 [2024-07-25 07:32:26.620904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.620930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.621084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.621111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.621276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.621302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.621486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.621512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.621633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.621658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 
00:26:54.217 [2024-07-25 07:32:26.621836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.621860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.622017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.622042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.622222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.622253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.622386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.622411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 00:26:54.217 [2024-07-25 07:32:26.622567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.217 [2024-07-25 07:32:26.622592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.217 qpair failed and we were unable to recover it. 
00:26:54.217 [2024-07-25 07:32:26.622738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.217 [2024-07-25 07:32:26.622763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.217 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 07:32:26.622888 through 07:32:26.644826, alternating between tqpair=0x7f3d3c000b90 and tqpair=0x7f3d2c000b90, always with addr=10.0.0.2, port=4420 ...]
00:26:54.220 [2024-07-25 07:32:26.644975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.645000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.645129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.645154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.645306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.645331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.645484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.645509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.645667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.645692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 
00:26:54.220 [2024-07-25 07:32:26.645872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.645897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.646096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.646124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.646271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.646297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.646458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.646483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.646640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.646665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 
00:26:54.220 [2024-07-25 07:32:26.646850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.646876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.647002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.647027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.647204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.647229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.647371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.647396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.647539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.647564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 
00:26:54.220 [2024-07-25 07:32:26.647714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.647739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.647876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.647900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.648030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.648055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.648200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.648229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.648389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.648414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 
00:26:54.220 [2024-07-25 07:32:26.648566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.220 [2024-07-25 07:32:26.648591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.220 qpair failed and we were unable to recover it. 00:26:54.220 [2024-07-25 07:32:26.648771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.648797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.648929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.648954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.649097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.649123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.649282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.649307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.649442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.649467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.649614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.649639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.649798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.649825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.649961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.649987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.650157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.650187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.650393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.650424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.650581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.650606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.650731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.650758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.650936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.650961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.651089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.651115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.651251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.651277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.651428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.651453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.651613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.651638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.651790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.651816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.651991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.652016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.652199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.652227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.652424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.652450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.652621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.652649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.653020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.653070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.653263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.653289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.653432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.653458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.653611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.653636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.653792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.653817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.653966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.653991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.654151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.654176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.654334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.654361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.654540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.654566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.654719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.654744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.654880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.654905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.655027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.655053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 
00:26:54.221 [2024-07-25 07:32:26.655231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.221 [2024-07-25 07:32:26.655266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.221 qpair failed and we were unable to recover it. 00:26:54.221 [2024-07-25 07:32:26.655435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.655460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.655618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.655645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.655832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.655857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.656007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.656032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 
00:26:54.222 [2024-07-25 07:32:26.656185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.656212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.656374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.656401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.656524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.656551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.656676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.656701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.656888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.656913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 
00:26:54.222 [2024-07-25 07:32:26.657063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.657104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.657327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.657353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.657536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.657561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.657737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.657763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.657938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.657963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 
00:26:54.222 [2024-07-25 07:32:26.658146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.658175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.658329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.658354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.658502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.658527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.658687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.658712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 00:26:54.222 [2024-07-25 07:32:26.658834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.222 [2024-07-25 07:32:26.658860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.222 qpair failed and we were unable to recover it. 
00:26:54.222 [2024-07-25 07:32:26.659009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.222 [2024-07-25 07:32:26.659034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.222 qpair failed and we were unable to recover it.
00:26:54.223 [2024-07-25 07:32:26.663000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.223 [2024-07-25 07:32:26.663058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.223 qpair failed and we were unable to recover it.
00:26:54.507 [2024-07-25 07:32:26.681993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.682022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.682151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.682178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.682354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.682397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.682573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.682600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.682792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.682835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 
00:26:54.507 [2024-07-25 07:32:26.682964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.682990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.683145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.683171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.683325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.683367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.683544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.683592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.683798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.683840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 
00:26:54.507 [2024-07-25 07:32:26.683965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.683991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.684115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.684140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.684325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.684369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.684545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.684570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.684727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.684752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 
00:26:54.507 [2024-07-25 07:32:26.684933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.684958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.685118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.685143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.685320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.685362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.685572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.685614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.685823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.685865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 
00:26:54.507 [2024-07-25 07:32:26.686047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.686072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.686253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.686280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.507 qpair failed and we were unable to recover it. 00:26:54.507 [2024-07-25 07:32:26.686482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.507 [2024-07-25 07:32:26.686526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.686696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.686738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.686880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.686922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.687076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.687103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.687301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.687330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.687556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.687598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.687771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.687814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.687969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.687994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.688116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.688141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.688324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.688368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.688546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.688588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.688798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.688839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.689020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.689045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.689176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.689203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.689394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.689440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.689625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.689668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.689835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.689878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.690032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.690057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.690235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.690272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.690453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.690495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.690675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.690723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.690906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.690947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.691097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.691122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.691290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.691318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.691493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.691536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.691692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.691734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.691940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.691968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.692117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.692142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.692312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.692355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.692545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.692587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.692757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.692800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.692915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.692940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.693127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.693152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 
00:26:54.508 [2024-07-25 07:32:26.693307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.693352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.508 [2024-07-25 07:32:26.693559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.508 [2024-07-25 07:32:26.693602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.508 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.693758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.693800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.693958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.693983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.694166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.694190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 
00:26:54.509 [2024-07-25 07:32:26.694367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.694411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.694561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.694603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.694748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.694791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.694916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.694942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.695101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.695126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 
00:26:54.509 [2024-07-25 07:32:26.695309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.695336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.695520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.695545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.695731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.695756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.695938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.695964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.696113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.696138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 
00:26:54.509 [2024-07-25 07:32:26.696311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.696356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.696505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.696550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.696758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.696800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.696953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.696978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.697132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.697157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 
00:26:54.509 [2024-07-25 07:32:26.697333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.697375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.697529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.697572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.697756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.697799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.697929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.697955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 00:26:54.509 [2024-07-25 07:32:26.698109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.509 [2024-07-25 07:32:26.698134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.509 qpair failed and we were unable to recover it. 
00:26:54.512 [2024-07-25 07:32:26.720235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.512 [2024-07-25 07:32:26.720266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.512 qpair failed and we were unable to recover it. 00:26:54.512 [2024-07-25 07:32:26.720435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.512 [2024-07-25 07:32:26.720478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.512 qpair failed and we were unable to recover it. 00:26:54.512 [2024-07-25 07:32:26.720635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.720662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.720862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.720905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.721030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.721055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.721186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.721212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.721418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.721461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.721616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.721658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.721831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.721874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.722021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.722046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.722212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.722237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.722449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.722493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.722661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.722705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.722914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.722956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.723091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.723115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.723248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.723274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.723424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.723451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.723648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.723690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.723874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.723917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.724041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.724066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.724219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.724249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.724425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.724469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.724643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.724685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.724864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.724910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.725072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.725097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.725252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.725278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.725449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.725491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.725667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.725710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.725914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.725958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.726121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.726148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.726344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.726387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.726530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.726573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.726754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.726799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.726928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.726953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.727107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.727134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 
00:26:54.513 [2024-07-25 07:32:26.727260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.727286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.727459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.727502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.727660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.513 [2024-07-25 07:32:26.727707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.513 qpair failed and we were unable to recover it. 00:26:54.513 [2024-07-25 07:32:26.727869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.727912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.728039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.728065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.728250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.728277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.728459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.728487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.728664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.728707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.728852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.728894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.729050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.729075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.729221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.729251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.729424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.729466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.729684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.729712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.729881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.729924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.730111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.730136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.730275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.730300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.730454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.730498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.730680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.730724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.730895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.730937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.731091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.731116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.731236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.731268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.731445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.731487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.731632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.731659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.731824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.731869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.732002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.732027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.732156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.732180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.732346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.732390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.732536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.732564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.732739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.732765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.732924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.732948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.733094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.733119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.733269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.733295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.733472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.733518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.733725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.733768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.733940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.733965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.734090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.734117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.734309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.734338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.734510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.734554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.734725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.734768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 00:26:54.514 [2024-07-25 07:32:26.734899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.514 [2024-07-25 07:32:26.734925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.514 qpair failed and we were unable to recover it. 
00:26:54.514 [2024-07-25 07:32:26.735076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.515 [2024-07-25 07:32:26.735101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.515 qpair failed and we were unable to recover it. 00:26:54.515 [2024-07-25 07:32:26.735229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.515 [2024-07-25 07:32:26.735265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.515 qpair failed and we were unable to recover it. 00:26:54.515 [2024-07-25 07:32:26.735429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.515 [2024-07-25 07:32:26.735458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.515 qpair failed and we were unable to recover it. 00:26:54.515 [2024-07-25 07:32:26.735602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.515 [2024-07-25 07:32:26.735628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.515 qpair failed and we were unable to recover it. 00:26:54.515 [2024-07-25 07:32:26.735783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.515 [2024-07-25 07:32:26.735808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.515 qpair failed and we were unable to recover it. 
00:26:54.518 [2024-07-25 07:32:26.757544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.757569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.757747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.757771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.757944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.757970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.758116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.758142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.758283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.758309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 
00:26:54.518 [2024-07-25 07:32:26.758488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.758536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.758725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.758752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.758911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.758942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.759098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.759124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.759291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.759321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 
00:26:54.518 [2024-07-25 07:32:26.759460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.759486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.759631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.759656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.759828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.759871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.760055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.760080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.760231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.760262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 
00:26:54.518 [2024-07-25 07:32:26.760431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.760458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.760603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.760630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.760834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.760878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.761059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.761085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.761212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.761238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 
00:26:54.518 [2024-07-25 07:32:26.761426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.761470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.761652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.761700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.761877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.761919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.518 [2024-07-25 07:32:26.762075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.518 [2024-07-25 07:32:26.762111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.518 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.762264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.762295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.762469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.762512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.762694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.762738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.762921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.762967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.763097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.763123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.763276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.763302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.763445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.763488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.763659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.763703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.763821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.763845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.763978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.764003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.764193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.764219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.764416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.764444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.764624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.764666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.764829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.764855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.765030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.765056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.765217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.765248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.765412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.765455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.765630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.765672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.765878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.765921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.766106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.766131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.766300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.766329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.766557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.766601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.766788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.766831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.767006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.767053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.767233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.767267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.767441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.767483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.767634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.767681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.767848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.767891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.768049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.768074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.768310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.768355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.768523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.768565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 
00:26:54.519 [2024-07-25 07:32:26.768742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.768785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.768992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.769035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.769213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.769237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.769453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.769496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.519 qpair failed and we were unable to recover it. 00:26:54.519 [2024-07-25 07:32:26.769656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.519 [2024-07-25 07:32:26.769699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.520 [2024-07-25 07:32:26.769878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.769921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.770077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.770101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.770254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.770296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.770498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.770541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.770697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.770729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.520 [2024-07-25 07:32:26.770926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.770970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.771101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.771128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.771307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.771350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.771529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.771554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.771731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.771774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.520 [2024-07-25 07:32:26.771926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.771951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.772131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.772155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.772310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.772335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.772491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.772517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.772697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.772727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.520 [2024-07-25 07:32:26.772896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.772921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.773080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.773104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.773309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.773336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.773488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.773516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.773709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.773750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.520 [2024-07-25 07:32:26.773924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.773948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.774097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.774295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.774324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.774518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.774565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.774774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.774817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.520 [2024-07-25 07:32:26.774972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.774997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.775154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.775181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.775355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.775600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.775753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.775795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 00:26:54.520 [2024-07-25 07:32:26.775950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.520 [2024-07-25 07:32:26.775975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.520 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.776139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.776164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.776337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.776379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.776524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.776568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.776772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.776815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.776935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.776959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.777113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.777137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.777264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.777290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.777473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.777515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.777663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.777705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.777856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.777898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.778028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.778053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.778206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.778231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.778413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.778456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.778631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.778676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.778848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.778891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.779043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.779069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.779258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.779284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.779440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.779466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.779610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.779652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.779859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.779888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.780060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.780085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.780237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.780269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.780480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.780508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.780703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.780746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.780944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.780995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.781291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.781319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.781451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.781476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.781641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.781670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.781819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.781846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.781985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.782012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 
00:26:54.521 [2024-07-25 07:32:26.782178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.782204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.782368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.782394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.782572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.782615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.782764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.521 [2024-07-25 07:32:26.782807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.521 qpair failed and we were unable to recover it. 00:26:54.521 [2024-07-25 07:32:26.782984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.783028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.783182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.783207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.783365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.783408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.783609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.783637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.783864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.783908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.784065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.784092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.784310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.784352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.784552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.784579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.784741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.784787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.784941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.784984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.785139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.785165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.785339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.785381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.785553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.785595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.785749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.785794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.785920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.785946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.786076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.786100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.786316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.786346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.786546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.786572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.786724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.786749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.786928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.786952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.787108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.787133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.787287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.787313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.787527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.787555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.787741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.787784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.787967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.787992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.788143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.788167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.788314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.788357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.788538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.788580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.788757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.788800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.788977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.789002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.789157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.789187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.522 [2024-07-25 07:32:26.789318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.789345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.789558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.789600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.789746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.789788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.789926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.789951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 00:26:54.522 [2024-07-25 07:32:26.790136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.522 [2024-07-25 07:32:26.790161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.522 qpair failed and we were unable to recover it. 
00:26:54.523 [2024-07-25 07:32:26.790331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.790374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.790526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.790554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.790767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.790794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.790962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.790986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.791146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.791172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 
00:26:54.523 [2024-07-25 07:32:26.791352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.791396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.791570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.791613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.791784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.791825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.791965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.791989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 00:26:54.523 [2024-07-25 07:32:26.792170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.523 [2024-07-25 07:32:26.792203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.523 qpair failed and we were unable to recover it. 
00:26:54.523 [2024-07-25 07:32:26.792392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.523 [2024-07-25 07:32:26.792435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.523 qpair failed and we were unable to recover it.
00:26:54.526 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously for each reconnect attempt through 07:32:26.816534 ...]
00:26:54.526 [2024-07-25 07:32:26.816711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.816756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.816937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.816962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.817087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.817113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.817268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.817293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.817503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.817547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 
00:26:54.526 [2024-07-25 07:32:26.817747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.817776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.817939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.817964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.818142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.818167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.818342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.818386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.818540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.818582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 
00:26:54.526 [2024-07-25 07:32:26.818726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.818770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.818950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.818976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.819132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.819159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.819336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.819379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.526 qpair failed and we were unable to recover it. 00:26:54.526 [2024-07-25 07:32:26.819582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.526 [2024-07-25 07:32:26.819625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.819803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.819851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.820039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.820064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.820223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.820260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.820444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.820487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.820696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.820738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.820891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.820932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.821082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.821107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.821266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.821292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.821434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.821479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.821645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.821687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.821893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.821935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.822062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.822088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.822212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.822237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.822430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.822473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.822655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.822687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.822905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.822947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.823115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.823140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.823295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.823323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.823541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.823584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.823754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.823799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.823967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.824010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.824190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.824215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.824399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.824441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.824618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.824661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.824801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.824847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.825028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.825053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.825211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.825236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.825394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.825438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.825625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.825668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.825853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.825897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.826047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.826072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.826229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.826260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.826438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.826483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.826654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.826697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.826901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.826929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 
00:26:54.527 [2024-07-25 07:32:26.827066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.527 [2024-07-25 07:32:26.827093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.527 qpair failed and we were unable to recover it. 00:26:54.527 [2024-07-25 07:32:26.827265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.827308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.827521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.827562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.827705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.827747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.827920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.827962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 
00:26:54.528 [2024-07-25 07:32:26.828118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.828143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.828322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.828366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.828520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.828546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.828724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.828766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.828968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.829012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 
00:26:54.528 [2024-07-25 07:32:26.829191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.829216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.829368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.829396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.829612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.829640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.829838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.829887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.830041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.830066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 
00:26:54.528 [2024-07-25 07:32:26.830250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.830277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.830445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.830472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.830630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.830672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.830846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.830889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.831093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.831140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 
00:26:54.528 [2024-07-25 07:32:26.831345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.831389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.831534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.831577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.831757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.831799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.832013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.832057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 00:26:54.528 [2024-07-25 07:32:26.832210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.832235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 
00:26:54.528 [2024-07-25 07:32:26.832420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.528 [2024-07-25 07:32:26.832448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.528 qpair failed and we were unable to recover it. 
[... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triple repeats verbatim from 07:32:26.832 through 07:32:26.847 for tqpair=0x7f3d34000b90, and from 07:32:26.847 through 07:32:26.857 for tqpair=0x7f3d3c000b90; errno = 111, addr=10.0.0.2, port=4420 in every occurrence ...]
00:26:54.531 [2024-07-25 07:32:26.857088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.857115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.857308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.857337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.857539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.857567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.857768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.857793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.857943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.857968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 
00:26:54.531 [2024-07-25 07:32:26.858131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.858156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.858329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.858358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.858538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.858564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.858767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.858792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.858920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.858946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 
00:26:54.531 [2024-07-25 07:32:26.859099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.531 [2024-07-25 07:32:26.859124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.531 qpair failed and we were unable to recover it. 00:26:54.531 [2024-07-25 07:32:26.859296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.859325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.859504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.859530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.859698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.859723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.859850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.859875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.860064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.860089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.860246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.860288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.860443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.860471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.860737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.860786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.860996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.861022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.861143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.861168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.861316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.861344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.861528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.861555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.861699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.861727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.861928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.861953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.862104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.862129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.862305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.862334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.862573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.862629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.862848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.862876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.863046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.863071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.863254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.863280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.863403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.863428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.863579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.863604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.863761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.863787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.863936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.863970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.864136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.864164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.864351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.864377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.864529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.864554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.864732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.864757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.864910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.864940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.865118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.865143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.865299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.865325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.865478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.865504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.865628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.865653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.865815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.865840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.865965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.865991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.866169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.866197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.866350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.866375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.866533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.866558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.532 [2024-07-25 07:32:26.866710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.866735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 
00:26:54.532 [2024-07-25 07:32:26.866891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.532 [2024-07-25 07:32:26.866917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.532 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.867072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.867097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.867252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.867278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.867416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.867441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.867572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.867596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 
00:26:54.533 [2024-07-25 07:32:26.867745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.867770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.867954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.867980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.868104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.868129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.868323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.868359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.868516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.868542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 
00:26:54.533 [2024-07-25 07:32:26.868665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.868689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.868809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.868834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.868987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.869013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.869144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.869169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.869328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.869354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 
00:26:54.533 [2024-07-25 07:32:26.869503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.869528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.869702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.869735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.869919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.869964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.870154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.870180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.870339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.870366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 
00:26:54.533 [2024-07-25 07:32:26.870540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.870583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.870754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.870797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.870968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.870996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.871162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.871187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 00:26:54.533 [2024-07-25 07:32:26.871325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.533 [2024-07-25 07:32:26.871369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.533 qpair failed and we were unable to recover it. 
00:26:54.533 [2024-07-25 07:32:26.871575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.871618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.871822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.871864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.871987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.872012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.872194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.872220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.872405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.872455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.872596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.872639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.872818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.872865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.873024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.873048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.873173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.873199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.873349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.873392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.873562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.873605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.873773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.873815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.873962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.873990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.874164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.874189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.874388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.533 [2024-07-25 07:32:26.874432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.533 qpair failed and we were unable to recover it.
00:26:54.533 [2024-07-25 07:32:26.874615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.874658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.874799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.874841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.874963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.874988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.875171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.875196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.875379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.875422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.875627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.875669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.875934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.875987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.876170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.876196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.876345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.876388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.876606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.876650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.876830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.876858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.877032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.877057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.877209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.877234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.877414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.877458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.877658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.877686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.877824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.877851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.878038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.878081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.878261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.878287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.878433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.878479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.878680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.878724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.878872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.878915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.879100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.879126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.879290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.879318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.879533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.879575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.879868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.879923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.880076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.880101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.880234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.880266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.880450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.880492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.880667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.880709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.880852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.880899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.881059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.881085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.881267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.881293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.881495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.881538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.881682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.881725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.534 [2024-07-25 07:32:26.881867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.534 [2024-07-25 07:32:26.881911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.534 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.882065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.882090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.882249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.882275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.882456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.882499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.882678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.882719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.882908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.882953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.883115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.883140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.883311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.883357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.883536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.883579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.883763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.883807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.883985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.884031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.884189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.884216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.884405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.884450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.884621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.884663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.884834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.884876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.885028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.885053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.885233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.885263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.885447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.885489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.885669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.885717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.885916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.885958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.886140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.886165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.886316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.886342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.886574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.886616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.886770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.886807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.887016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.887045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.887193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.887218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.887381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.887408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.887587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.887617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.887791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.887819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.887964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.887990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.888141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.888166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.888377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.888407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.888607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.888635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.888769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.888796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.888969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.888994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.889119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.889145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.889335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.889364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.889674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.889736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.889912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.889937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.890114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.890140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.890265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.890310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.535 [2024-07-25 07:32:26.890521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.535 [2024-07-25 07:32:26.890549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.535 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.890779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.890807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.890976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.891001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.891178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.891203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.891388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.891417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.891700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.891750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.891926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.891951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.892104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.892129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.892321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.892349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.892514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.892543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.892743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.892768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.892917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.892942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.893094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.893120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.893253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.893295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.893499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.893524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.893673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.893698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.893817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.893842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.894024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.894049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.894204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.894230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.894418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.894446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.894715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.894762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.894959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.894988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.895167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.895192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.895408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.895437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.895604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.536 [2024-07-25 07:32:26.895632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.536 qpair failed and we were unable to recover it.
00:26:54.536 [2024-07-25 07:32:26.895816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.895844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.896023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.896048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.896173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.896198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.896384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.896413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.896699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.896750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 
00:26:54.536 [2024-07-25 07:32:26.896995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.897045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.897230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.897260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.897448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.897477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.897682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.897709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.897891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.897919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 
00:26:54.536 [2024-07-25 07:32:26.898123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.898148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.898347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.898376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.898521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.898548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.898795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.898840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.899034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.899059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 
00:26:54.536 [2024-07-25 07:32:26.899216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.899250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.899409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.899434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.536 [2024-07-25 07:32:26.899562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.536 [2024-07-25 07:32:26.899586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.536 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.899762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.899787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.899919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.899945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.900147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.900175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.900336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.900363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.900524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.900550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.900714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.900739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.900917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.900942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.901069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.901094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.901250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.901276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.901430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.901455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.901614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.901639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.901820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.901844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.902002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.902027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.902225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.902259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.902424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.902449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.902581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.902606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.902763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.902788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.902939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.902965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.903121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.903151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.903341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.903367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.903521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.903547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.903701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.903725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.903915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.903940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.904106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.904134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.904327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.904352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.904507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.904533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.904684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.904710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.904862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.904887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.905036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.905061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.905211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.905236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.905401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.905426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.905612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.905637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.905794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.905819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.905941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.905968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.906151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.906180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.906365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.906392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.906542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.906567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.906726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.906751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.906878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.906903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.907054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.907079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.907231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.907274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.907456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.907481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 
00:26:54.537 [2024-07-25 07:32:26.907640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.907665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.907846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.537 [2024-07-25 07:32:26.907871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.537 qpair failed and we were unable to recover it. 00:26:54.537 [2024-07-25 07:32:26.908020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.908045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.908185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.908213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.908401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.908427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 
00:26:54.538 [2024-07-25 07:32:26.908550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.908576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.908699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.908725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.908878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.908904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.909083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.909108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.909263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.909307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 
00:26:54.538 [2024-07-25 07:32:26.909466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.909491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.909617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.909643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.909771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.909796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.909987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.910013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 00:26:54.538 [2024-07-25 07:32:26.910161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.538 [2024-07-25 07:32:26.910186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.538 qpair failed and we were unable to recover it. 
00:26:54.538 [2024-07-25 07:32:26.910339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.538 [2024-07-25 07:32:26.910365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.538 qpair failed and we were unable to recover it.
00:26:54.540 [... last error triple repeated 114 more times, 2024-07-25 07:32:26.910519 through 07:32:26.931112, same tqpair/addr/port ...]
00:26:54.540 [2024-07-25 07:32:26.931264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.540 [2024-07-25 07:32:26.931301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.540 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.931453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.931479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.931663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.931688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.931812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.931838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.932018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.932043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.932180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.932207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.932391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.932421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.932544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.932569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.932706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.932732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.932914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.932939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.933094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.933121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.933311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.933338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.933465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.933491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.933670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.933695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.933850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.933876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.934026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.934051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.934209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.934233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.934356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.934381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.934537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.934562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.934743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.934768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.934939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.934965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.935097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.935122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.935306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.935331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.935477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.935502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.935634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.935659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.935837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.935862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.936038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.936063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.936249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.936275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.936435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.936460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.936615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.936640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.936794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.936819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.936973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.936998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.937150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.937175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.937313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.937346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.937558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.937602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.937780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.937824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.938002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.938046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.938224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.938254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.938380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.938406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.938618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.938646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.938862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.938904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.939055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.939080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.939246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.939272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.939453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.939478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.541 [2024-07-25 07:32:26.939651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.939678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 
00:26:54.541 [2024-07-25 07:32:26.939836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.541 [2024-07-25 07:32:26.939879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.541 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.940081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.940128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.940291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.940320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.940518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.940561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.940736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.940780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 
00:26:54.542 [2024-07-25 07:32:26.940929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.940972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.941104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.941129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.941286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.941312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.941479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.941507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.941697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.941742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 
00:26:54.542 [2024-07-25 07:32:26.941917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.941960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.942115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.942141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.942318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.942362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.942533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.942574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.942746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.942788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 
00:26:54.542 [2024-07-25 07:32:26.942952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.942978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.943102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.943127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.943289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.943318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.943490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.943534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.943708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.943751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 
00:26:54.542 [2024-07-25 07:32:26.943907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.943933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.944119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.944144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.944341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.944387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.944534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.944577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.944833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.944883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 
00:26:54.542 [2024-07-25 07:32:26.945036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.945062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.945185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.945210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.945378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.945403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.945570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.945608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 00:26:54.542 [2024-07-25 07:32:26.945777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.542 [2024-07-25 07:32:26.945803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.542 qpair failed and we were unable to recover it. 
00:26:54.542 [2024-07-25 07:32:26.945992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.542 [2024-07-25 07:32:26.946018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.542 qpair failed and we were unable to recover it.
[... the three-line error above repeats verbatim for every subsequent connect attempt from 07:32:26.946196 through 07:32:26.968765 (over 100 further attempts, all errno = 111 / connection refused against 10.0.0.2:4420, tqpair=0x1c29250), each ending "qpair failed and we were unable to recover it." ...]
00:26:54.545 [2024-07-25 07:32:26.968942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.968969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.969098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.969126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.969308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.969333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.969531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.969559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.969765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.969793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.969964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.969992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.970139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.970164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.970288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.970314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.970466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.970495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.970688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.970716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.970891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.970916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.971088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.971115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.971287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.971315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.971485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.971513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.971680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.971705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.971831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.971872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.972045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.972073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.972207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.972236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.972438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.972469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.972601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.972643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.972815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.972840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.973033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.973061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.973258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.973303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.973459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.973484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.973643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.973667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.973868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.973896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.974067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.974092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.974218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.974248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.974403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.974428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.974597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.974625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.974800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.974824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.974957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.974997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.975169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.975202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.975374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.975403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.975579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.975605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.975774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.975801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.976008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.976033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.976188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.976213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.976338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.976363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.976490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.976531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 
00:26:54.545 [2024-07-25 07:32:26.976727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.976755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.976888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.545 [2024-07-25 07:32:26.976915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.545 qpair failed and we were unable to recover it. 00:26:54.545 [2024-07-25 07:32:26.977090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.977117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.977270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.977295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.977484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.977512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.977683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.977711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.977864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.977888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.978034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.978058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.978266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.978294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.978428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.978457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.978659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.978684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.978861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.978889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.979070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.979095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.979253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.979279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.979469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.979494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.979656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.979851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.979879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.980081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.980106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.980232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.980263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.980388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.980417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.980601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.980629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.980798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.980825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.981013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.981037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.981206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.981234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.981391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.981417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.981612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.981640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.981776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.981801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.981957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.981999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.982160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.982187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.982363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.982389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.982534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.982559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.982752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.982780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.982944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.982971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.983140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.983167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 00:26:54.546 [2024-07-25 07:32:26.983347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.546 [2024-07-25 07:32:26.983373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.546 qpair failed and we were unable to recover it. 
00:26:54.546 [2024-07-25 07:32:26.983528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.983553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.983701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.983726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.983863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.983888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.984063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.984088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.984258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.984286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.984472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.984499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.984661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.984686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.984834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.546 [2024-07-25 07:32:26.984858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.546 qpair failed and we were unable to recover it.
00:26:54.546 [2024-07-25 07:32:26.984985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.985010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.985173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.985197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.985354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.985383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.985558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.985587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.985716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.985758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.985901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.985930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.986069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.986096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.986239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.986271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.986402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.986428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.986611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.986640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.986786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.986815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.986962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.986987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.987138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.987162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.987314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.987340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.987531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.987558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.987709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.987734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.987882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.987907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.988095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.988131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.988305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.988332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.988460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.988485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.988637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.988662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.988814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.988839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.988981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.989007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.989145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.989173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.989318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.989345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.989499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.989525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.989685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.989710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.989892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.989917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.990094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.990119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.990275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.990300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.990453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.990484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.990667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.990692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.990815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.990841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.991020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.991045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.991232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.991283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.991404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.991429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.991582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.991608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.991764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.991789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.991940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.991966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.992124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.992149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.992296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.992322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.992475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.992502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.992620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.992645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.992802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.992828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.992988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.993014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.993199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.993227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.993426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.993452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.993608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.993635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.993819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.993844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.993998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.547 [2024-07-25 07:32:26.994025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.547 qpair failed and we were unable to recover it.
00:26:54.547 [2024-07-25 07:32:26.994205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.994230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.994385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.994411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.994544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.994570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.994718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.994743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.994903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.994928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.995108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.995137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.995308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.995334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.995482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.995507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.995684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.995710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.995837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.995862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.996009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.996034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.996186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.996212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.996395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.996421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.996580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.996607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.996754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.996779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.996897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.996923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.997123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.997151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.997323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.997348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.997499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.997524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.997652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.997677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.997831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.997860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.998022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.998048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.998205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.998230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.998388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.998413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.998565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.998590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.998750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.998775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.998961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.998986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.999136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.999164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.999364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.999389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.999538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.999563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.999717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.999743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:26.999887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:26.999912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.000069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.000095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.000251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.000277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.000431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.000456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.000588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.000613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.000735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.000760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.000940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.000965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.001170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.001197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.001336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.001361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.001518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.001544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.001693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.001718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.001839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.001863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.002041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.002066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.002237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.002285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.002439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.002464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.002612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.002638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.002768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.002794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.002954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.002979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.003101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.003126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.548 [2024-07-25 07:32:27.003277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.548 [2024-07-25 07:32:27.003303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.548 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.003453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.549 [2024-07-25 07:32:27.003478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.549 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.003656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.549 [2024-07-25 07:32:27.003681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.549 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.003833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.549 [2024-07-25 07:32:27.003857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.549 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.004034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.549 [2024-07-25 07:32:27.004058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.549 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.004258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.549 [2024-07-25 07:32:27.004301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.549 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.004431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.549 [2024-07-25 07:32:27.004456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.549 qpair failed and we were unable to recover it.
00:26:54.549 [2024-07-25 07:32:27.004574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.004599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.004721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.004746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.004863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.004888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.005044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.005072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.005226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.005256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.549 [2024-07-25 07:32:27.005436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.005461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.005612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.005637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.005787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.005813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.005974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.006000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.006165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.006193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.549 [2024-07-25 07:32:27.006339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.006366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.006515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.006541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.006721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.006746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.006904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.006929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.007110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.007135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.549 [2024-07-25 07:32:27.007296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.007322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.007453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.007478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.007639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.007664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.007788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.007813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.007960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.007985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.549 [2024-07-25 07:32:27.008158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.008186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.008392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.008417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.008563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.008588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.008767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.008792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.008950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.008976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.549 [2024-07-25 07:32:27.009122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.009147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.009282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.009307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.009427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.009453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.009616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.009641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.009763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.009788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.549 [2024-07-25 07:32:27.009948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.009975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.010122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.010150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.010323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.010349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.010512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.010538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 00:26:54.549 [2024-07-25 07:32:27.010716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.549 [2024-07-25 07:32:27.010741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.549 qpair failed and we were unable to recover it. 
00:26:54.834 [2024-07-25 07:32:27.010888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.010914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.011048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.011073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.011192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.011217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.011378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.011404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.011532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.011558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 
00:26:54.834 [2024-07-25 07:32:27.011697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.011723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.011872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.011897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.012027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.012054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.012289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.012320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.012477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.012509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 
00:26:54.834 [2024-07-25 07:32:27.012891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37230 (9): Bad file descriptor
00:26:54.834 [2024-07-25 07:32:27.013060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.834 [2024-07-25 07:32:27.013099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:54.834 qpair failed and we were unable to recover it.
00:26:54.834 [2024-07-25 07:32:27.018117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.834 [2024-07-25 07:32:27.018148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.834 qpair failed and we were unable to recover it.
00:26:54.834 [2024-07-25 07:32:27.020472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.020500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.020667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.020694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.020900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.020943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.021123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.021148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.021306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.021332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 
00:26:54.834 [2024-07-25 07:32:27.021498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.021541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.834 qpair failed and we were unable to recover it. 00:26:54.834 [2024-07-25 07:32:27.021684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.834 [2024-07-25 07:32:27.021726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.021938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.021981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.022096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.022122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.022287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.022316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.022515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.022558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.022729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.022772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.022979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.023023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.023176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.023201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.023368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.023413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.023598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.023641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.023859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.023901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.024060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.024087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.024272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.024298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.024479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.024523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.024703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.024746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.025002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.025029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.025190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.025216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.025370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.025413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.025625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.025668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.025892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.025935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.026128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.026153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.026329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.026373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.026586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.026629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.026770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.026812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.027014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.027056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.027238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.027270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.027426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.027452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.027627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.027655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.027845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.027889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.028030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.028058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.028257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.028283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.028494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.028522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.028711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.028758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.028909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.028951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.029132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.029157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.029336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.029380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.029541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.029568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.029738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.029781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.029945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.029989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.030142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.030167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.030336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.030380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.030560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.030605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.030786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.030832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.031016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.031042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.031193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.031218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.031441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.031484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.031678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.031708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.031883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.031911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.032288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.032500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.032541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.032739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.032767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.032999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.033027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.033231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.033266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.033406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.033431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.033588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.033614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.033764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.033789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.033942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.033967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.034119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.034146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.034352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.034378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.034512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.034555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.034698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.034723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.034867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.034895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.035061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.035089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.035260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.035285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.035410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.035436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.035639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.035667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.035860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.035888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.036087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.036115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.036307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.036333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 
00:26:54.835 [2024-07-25 07:32:27.036455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.036480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.036677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.036705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.036882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.835 [2024-07-25 07:32:27.036909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.835 qpair failed and we were unable to recover it. 00:26:54.835 [2024-07-25 07:32:27.037071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.037100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.037291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.037317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 
00:26:54.836 [2024-07-25 07:32:27.037469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.037493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.037708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.037733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.037886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.037916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.038088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.038116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.038331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.038357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 
00:26:54.836 [2024-07-25 07:32:27.038506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.038547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.038715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.038743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.038933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.038960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.039154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.039182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 00:26:54.836 [2024-07-25 07:32:27.039362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.836 [2024-07-25 07:32:27.039388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.836 qpair failed and we were unable to recover it. 
00:26:54.837 [2024-07-25 07:32:27.060942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.060969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 00:26:54.837 [2024-07-25 07:32:27.061143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.061167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 00:26:54.837 [2024-07-25 07:32:27.061370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.061399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 00:26:54.837 [2024-07-25 07:32:27.061561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.061588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 00:26:54.837 [2024-07-25 07:32:27.061749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.061773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 
00:26:54.837 [2024-07-25 07:32:27.061907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.061933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 00:26:54.837 [2024-07-25 07:32:27.062058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.837 [2024-07-25 07:32:27.062085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.837 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.062240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.062272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.062387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.062426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.062583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.062611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.062762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.062786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.062937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.062961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.063123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.063148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.063268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.063294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.063468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.063501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.063677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.063705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.063872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.063896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.064050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.064074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.064254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.064283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.064449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.064474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.064633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.064658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.064812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.064836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.065013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.065039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.065254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.065283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.065446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.065474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.065642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.065667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.065784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.065809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.065997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.066025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.066203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.066229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.066370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.066413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.066589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.066614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.066792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.066817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.066988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.067015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.067220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.067251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.067406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.067431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.067575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.067603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.067741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.067770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.067951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.067976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.068150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.068179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.068321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.068351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.068499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.068524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.068676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.068724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.068895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.068923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.069084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.069109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.069308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.069337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.069503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.069530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.069703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.069728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.069853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.069878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.070065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.070092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.070233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.070264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.070392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.070417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.070568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.070592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.070770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.070794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.070993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.071021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.071180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.071209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.071393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.071420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.071541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.071566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.071727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.071754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.071929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.071954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.072125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.072153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.072361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.072387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.072507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.072533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.072694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.072719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.072881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.072921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.073066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.073091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.073252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.073277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.073405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.073430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.073591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.073616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.073741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.073766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.073970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.073998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.074180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.074206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.074393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.074419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.074566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.074594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.074739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.074764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 00:26:54.838 [2024-07-25 07:32:27.074922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.838 [2024-07-25 07:32:27.074963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.838 qpair failed and we were unable to recover it. 
00:26:54.838 [2024-07-25 07:32:27.075166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.075194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.075358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.075384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.075559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.075589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.075783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.075811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.075957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.075983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 
00:26:54.839 [2024-07-25 07:32:27.076138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.076162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.076312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.076351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.076506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.076531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.076684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.076708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 00:26:54.839 [2024-07-25 07:32:27.076895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.839 [2024-07-25 07:32:27.076920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.839 qpair failed and we were unable to recover it. 
00:26:54.839 [2024-07-25 07:32:27.077073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.077098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.077252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.077277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.077452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.077479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.077634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.077658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.077782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.077806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.078007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.078035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.078182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.078208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.078442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.078498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.078698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.078725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.078896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.078921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.079086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.079114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.079287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.079316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.079489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.079514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.079713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.079742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.079907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.079935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.080078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.080103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.080257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.080299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.080485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.080511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.080682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.080707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.080888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.080915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.081108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.081135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.081281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.081306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.081465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.081489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.081684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.081711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.081911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.081940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.082081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.082108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.082291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.082320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.082468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.082493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.082671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.082696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.082869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.082896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.083068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.083094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.083277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.083320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.083491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.083521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.083694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.083719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.083852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.083877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.083999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.084024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.084203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.084228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.084429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.084457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.084624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.084652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.084857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.084882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.085052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.085076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.085226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.085277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.085456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.839 [2024-07-25 07:32:27.085481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.839 qpair failed and we were unable to recover it.
00:26:54.839 [2024-07-25 07:32:27.085666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.085694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.085863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.085891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.086064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.086088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.086214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.086248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.086423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.086449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.086612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.086637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.086838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.086867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.087016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.087045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.087216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.087254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.087453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.087479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.087648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.087676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.087822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.087847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.087999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.088024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.088202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.088231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.088405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.088430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.088585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.088610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.088756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.088781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.088902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.088927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.089094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.089122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.089298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.089325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.089479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.089504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.089658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.089683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.089830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.089857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.090005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.090029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.090180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.090222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.090404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.090429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.090577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.090602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.090773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.090801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.090936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.090964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.091135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.091161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.091327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.091356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.091515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.091543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.091713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.091738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.091886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.091928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.092096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.092124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.092297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.092323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.092476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.092518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.092689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.092717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.092871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.092896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.093087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.093112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.093327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.093356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.093560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.093585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.093765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.093792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.093980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.094008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.094145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.094170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.094365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.094394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.094533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.094561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.094729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.094754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.094962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.094990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.095158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.095185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.095339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.095365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.095517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.840 [2024-07-25 07:32:27.095542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.840 qpair failed and we were unable to recover it.
00:26:54.840 [2024-07-25 07:32:27.095721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.095748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.095914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.095939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.096108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.096135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.096262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.096291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.096465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.096490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 
00:26:54.840 [2024-07-25 07:32:27.096652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.096680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.096823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.096851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.097005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.097046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.097182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.097209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.097382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.097408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 
00:26:54.840 [2024-07-25 07:32:27.097535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.097559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.097766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.097794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.840 qpair failed and we were unable to recover it. 00:26:54.840 [2024-07-25 07:32:27.097957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.840 [2024-07-25 07:32:27.097985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.098153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.098178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.098307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.098333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.098459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.098484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.098706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.098733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.098979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.099032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.099230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.099268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.099475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.099500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.099764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.099817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.099988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.100016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.100185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.100210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.100345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.100370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.100569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.100601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.100777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.100801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.100923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.100967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.101160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.101188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.101355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.101380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.101579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.101607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.101778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.101803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.101978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.102003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.102211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.102239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.102417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.102444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.102595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.102620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.102795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.102823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.102961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.102989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.103162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.103187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.103353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.103379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.103542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.103568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.103715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.103740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.103916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.103944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.104107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.104135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.104332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.104358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.104509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.104552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.104691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.104718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.104891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.104916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.105073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.105098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.105219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.105251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.105381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.105406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.105535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.105561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.105742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.105775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.105955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.105980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.106191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.106219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.106428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.106453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.106572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.106598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.106796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.106824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.107020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.107048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.107256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.107282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.107417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.107442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.107562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.107587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.107736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.107761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.107882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.107907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.108088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.108113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.108270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.108297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.108453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.108478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.108632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.108661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.108836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.108861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.109067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.109094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.109308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.109334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.841 [2024-07-25 07:32:27.109468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.109493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.109661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.109689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.109856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.109883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.110070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.110098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 00:26:54.841 [2024-07-25 07:32:27.110308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.841 [2024-07-25 07:32:27.110333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.841 qpair failed and we were unable to recover it. 
00:26:54.842 [2024-07-25 07:32:27.110492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.110532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.110694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.110719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.110877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.110921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.111088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.111123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.111274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.111300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 
00:26:54.842 [2024-07-25 07:32:27.111456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.111481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.111657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.111683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.111835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.111860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.112046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.112071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.112264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.112293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 
00:26:54.842 [2024-07-25 07:32:27.112457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.112482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.112647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.112675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.112807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.112835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.113001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.113026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 00:26:54.842 [2024-07-25 07:32:27.113201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.842 [2024-07-25 07:32:27.113228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.842 qpair failed and we were unable to recover it. 
00:26:54.843 [2024-07-25 07:32:27.135012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.843 [2024-07-25 07:32:27.135037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.843 qpair failed and we were unable to recover it. 00:26:54.843 [2024-07-25 07:32:27.135269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.843 [2024-07-25 07:32:27.135312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.843 qpair failed and we were unable to recover it. 00:26:54.843 [2024-07-25 07:32:27.135449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.135475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.135654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.135682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.135824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.135853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.135988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.136013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.136171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.136195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.136367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.136395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.136572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.136596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.136768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.136795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.136962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.136989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.137141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.137167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.137298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.137324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.137479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.137503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.137628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.137653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.137805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.137830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.138033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.138059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.138178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.138203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.138344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.138386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.138526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.138554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.138701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.138725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.138848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.138872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.139045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.139073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.139216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.139249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.139375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.139400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.139537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.139564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.139736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.139760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.139963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.139990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.140160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.140188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.140375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.140400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.140558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.140583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.140728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.140771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.140917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.140941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.141098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.141141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.141304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.141332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.141507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.141543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.141726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.141754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.141915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.141942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.142088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.142113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.142237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.142269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.142458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.142485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.142655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.142684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.142818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.142860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.143029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.143058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.143203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.143228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.143421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.143450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.143615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.143643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.143800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.143825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.144021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.144049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.144183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.144211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.144413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.144438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.144609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.144637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.144798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.144825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.144992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.145017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.145154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.145184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 
00:26:54.844 [2024-07-25 07:32:27.145372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.145398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.145553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.145578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.145700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.145725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.145916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.844 [2024-07-25 07:32:27.145941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.844 qpair failed and we were unable to recover it. 00:26:54.844 [2024-07-25 07:32:27.146059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.146083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.146212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.146238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.146434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.146460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.146587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.146616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.146749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.146793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.146958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.146986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.147135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.147160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.147358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.147384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.147508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.147551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.147694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.147723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.147847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.147872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.148051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.148080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.148222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.148253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.148381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.148405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.148587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.148615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.148787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.148811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.148934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.148975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.149149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.149176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.149354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.149381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.149505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.149545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.149715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.149743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.149911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.149936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.150103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.150131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.150282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.150311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.150485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.150510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.150685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.150713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.150841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.150868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.151049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.151073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.151255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.151283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.151450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.151475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.151628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.151653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.151783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.151808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.151964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.151989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.152132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.152159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.152335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.152363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.152480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.152506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.152626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.152654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.152808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.152832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.153013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.153037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.153151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.153175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.153298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.153324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.153498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.153525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.153664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.153689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.153872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.153914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.154074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.154101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.154272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.154297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.154472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.154499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.154668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.154697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.154855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.154880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.155008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.155034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.155236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.155271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.155437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.155462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.155666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.155694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.155886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.155913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.156092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.156116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.156292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.156322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.156461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.156489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.156660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.156685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.156836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.156861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.157021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.157048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.157254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.157280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.157428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.157453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.157650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.157678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.157853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.157878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.158012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.158037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.158218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.158269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 
00:26:54.845 [2024-07-25 07:32:27.158413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.845 [2024-07-25 07:32:27.158438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.845 qpair failed and we were unable to recover it. 00:26:54.845 [2024-07-25 07:32:27.158634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.158662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.158829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.158857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.159004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.159030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.159239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.159274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.159442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.159469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.159640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.159664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.159866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.159893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.160066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.160091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.160238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.160271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.160450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.160478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.160681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.160706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.160830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.160854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.161033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.161058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.161222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.161259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.161410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.161435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.161615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.161640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.161781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.161809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.161955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.161983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.162145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.162171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.162325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.162351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.162516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.162541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.162694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.162719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.162860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.162887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.163086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.163111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.163247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.163289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.163484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.163509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.163690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.163714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.163918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.163945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.164108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.164135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.164308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.164334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.164493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.164533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.164696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.164724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.164866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.164891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.165053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.165078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.165210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.165236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.165401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.165426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.165558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.165599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.165737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.165769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.165928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.165953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.166150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.166178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.166353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.166379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.166526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.166551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.166679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.166705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.166884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.166910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.167092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.167116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.167238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.167289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.167450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.167478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.167688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.167714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.167860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.167888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.168061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.168089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.168267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.168293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.168443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.168472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.168672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.168700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 00:26:54.846 [2024-07-25 07:32:27.168846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.846 [2024-07-25 07:32:27.168872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:54.846 qpair failed and we were unable to recover it. 
00:26:54.846 [2024-07-25 07:32:27.169011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.169053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.169237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.169267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.169420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.169445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.169601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.169644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.169836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.169865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.170035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.170061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.170226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.170263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.170427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.170452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.170600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.170625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.846 qpair failed and we were unable to recover it.
00:26:54.846 [2024-07-25 07:32:27.170758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.846 [2024-07-25 07:32:27.170783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.170938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.170967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.171125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.171150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.171358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.171387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.171517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.171544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.171688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.171713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.171908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.171935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.172110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.172138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.172310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.172336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.172517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.172542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.172718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.172745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.172937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.172962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.173093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.173118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.173265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.173297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.173482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.173518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.173729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.173757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.173930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.173955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.174111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.174135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.174313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.174341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.174514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.174543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.174737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.174762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.174963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.174991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.175184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.175212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.175394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.175419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.175561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.175589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.175734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.175762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.175926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.175951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.176155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.176183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.176389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.176415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.176570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.176595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.176727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.176753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.176917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.176942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.177087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.177116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.177306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.177333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.177485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.177535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.177698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.177722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.177883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.177908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.178076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.178104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.178287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.178314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.178474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.178499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.178626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.178652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.178811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.178836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.179028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.179062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.179225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.179258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.179422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.179448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.179604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.179630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.179809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.179834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.179996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.180022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.180177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.180204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.180443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.180471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.180639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.180663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.180843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.180869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.181016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.181040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.181223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.181255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.181412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.181437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.181574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.181599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.181761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.181786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.181940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.181965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.182127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.182155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.182309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.182335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.182497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.182536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.182698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.182725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.182857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.847 [2024-07-25 07:32:27.182882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.847 qpair failed and we were unable to recover it.
00:26:54.847 [2024-07-25 07:32:27.183015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.183042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.183169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.183195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.183393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.183419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.183576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.183602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.183767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.183792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.183923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.183949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.184111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.184137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.184264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.184291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.184447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.184473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.184652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.184677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.184825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.184851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.184976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.185002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.185165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.185191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.185345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.185371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.185526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.185552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.185681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.185708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.185857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.185884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.186016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.186041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.186238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.186287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.186413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.186443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.186601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.186626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.186807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.186833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.186986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.848 [2024-07-25 07:32:27.187011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:54.848 qpair failed and we were unable to recover it.
00:26:54.848 [2024-07-25 07:32:27.187137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.187162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.187315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.187341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.187486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.187511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.187675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.187700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.187832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.187858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.188035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.188060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.188253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.188295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.188444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.188470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.188620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.188645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.188777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.188803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.188961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.188987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.189145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.189173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.189304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.189331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.189511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.189537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.189685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.189710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.189860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.189886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.190037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.190064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.190252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.190296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.190455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.190481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.190647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.190672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.190803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.190828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.190982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.191007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.191183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.191208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.191347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.191373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.191532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.191557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.191682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.191708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.191830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.191856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.192010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.192035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.192183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.192208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.192371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.192397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.192552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.192577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.192724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.192749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.192904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.192929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.193140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.193168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.193336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.193362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.193515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.193540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.193700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.193729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.193881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.193907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.194088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.194113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.194239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.194269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.848 [2024-07-25 07:32:27.194446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.194472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.194624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.194650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.194801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.194826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.194969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.194994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 00:26:54.848 [2024-07-25 07:32:27.195130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.848 [2024-07-25 07:32:27.195158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.848 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.195301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.195327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.195509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.195534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.195690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.195715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.195898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.195923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.196050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.196075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.196230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.196261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.196416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.196441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.196630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.196655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.196833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.196858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.196987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.197013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.197185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.197214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.197420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.197446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.197606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.197632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.197783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.197808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.197959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.197984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.198154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.198183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.198381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.198407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.198541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.198566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.198726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.198752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.198905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.198931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.199108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.199133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.199291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.199317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.199467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.199492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.199615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.199641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.199771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.199797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.199951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.199976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.200123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.200148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.200310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.200336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.200483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.200508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.200697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.200722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.200880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.200906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.201058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.201089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.201265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.201308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.201488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.201514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.201627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.201652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.201787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.201812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.201980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.202005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.202155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.202180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.202329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.202355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 00:26:54.849 [2024-07-25 07:32:27.202516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.202541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it. 
00:26:54.849 [2024-07-25 07:32:27.202666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.849 [2024-07-25 07:32:27.202691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.849 qpair failed and we were unable to recover it.
00:26:54.851 [2024-07-25 07:32:27.220255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.220294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it.
00:26:54.851 [2024-07-25 07:32:27.223343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.223387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.223567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.223609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.223780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.223823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.223952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.223978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.224103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.224129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.224338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.224382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.224561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.224608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.224809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.224852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.225015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.225040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.225216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.225247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.225413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.225456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.225603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.225645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.225788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.225832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.226012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.226038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.226171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.226199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.226385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.226429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.226585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.226628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.226844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.226870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.227024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.227050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.227188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.227215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.227752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.227781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.227975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.228001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.228125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.228151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.228335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.228362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.228552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.228578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.228774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.228818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.229007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.229034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.229165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.229192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.229377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.229422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.229640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.229683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.229863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.229906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.230092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.230118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.230255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.230283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.230476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.230502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.230764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.230812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.230958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.231001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.231154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.231184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.231375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.231420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.231564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.231606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.231743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.231768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.231927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.231953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.232131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.232156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.232313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.232356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.232509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.232552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.232728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.232769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.232942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.232985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.233138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.233163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.233342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.233385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.233552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.233596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.851 [2024-07-25 07:32:27.233787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.233813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 
00:26:54.851 [2024-07-25 07:32:27.233947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.851 [2024-07-25 07:32:27.233972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.851 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.234087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.234113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.234297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.234324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.234451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.234476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.234628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.234653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 
00:26:54.852 [2024-07-25 07:32:27.234788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.234813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.234967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.234993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.235115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.235140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.235300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.235327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.235471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.235517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 
00:26:54.852 [2024-07-25 07:32:27.235657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.235700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.235881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.235906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.236087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.236113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.236262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.236292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.236450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.236492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 
00:26:54.852 [2024-07-25 07:32:27.236701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.236743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.236938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.236963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.237124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.237149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.237290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.237319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.237506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.237534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 
00:26:54.852 [2024-07-25 07:32:27.237701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.237743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.237896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.237921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.238080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.238104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.238314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.238357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.238506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.238548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 
00:26:54.852 [2024-07-25 07:32:27.238734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.238777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.238939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.238965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.239120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.239146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.239271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.239297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 00:26:54.852 [2024-07-25 07:32:27.239456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.852 [2024-07-25 07:32:27.239500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.852 qpair failed and we were unable to recover it. 
00:26:54.852 [... 07:32:27.239713 through 07:32:27.262026: the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x7f3d34000b90 at 10.0.0.2, port 4420; "qpair failed and we were unable to recover it.") repeated for every subsequent retry; duplicate entries elided ...]
00:26:54.854 [2024-07-25 07:32:27.262153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.262179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.262326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.262369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.262538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.262582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.262755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.262800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.262945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.262970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.263094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.263120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.263308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.263353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.263501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.263545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.263750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.263793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.263974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.264000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.264151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.264176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.264345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.264388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.264568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.264610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.264821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.264863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.265017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.265044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.265205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.265230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.265417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.265458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.265631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.265674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.265844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.265886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.266007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.266032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.266163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.266188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.266351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.266376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.266527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.266552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.266738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.266763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.266937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.266962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.267138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.267163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.267313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.267357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.267537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.267579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.267784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.267826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.268001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.268026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.268224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.268255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.268399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.268444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.268642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.268685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.268897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.268943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.269094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.269119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.269284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.269312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.269473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.269519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.269721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.269764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.269938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.269979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.270104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.270131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.270337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.270380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.270528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.270570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.270746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.270788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.270907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.270932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.271057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.271083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.271286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.271315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.271522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.271549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.271769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.271811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.271997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.272040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.272191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.272217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.272399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.272441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.272622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.272668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.272837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.272879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.273027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.273053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.273182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.273208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 
00:26:54.854 [2024-07-25 07:32:27.273419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.273462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.273641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.854 [2024-07-25 07:32:27.273684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.854 qpair failed and we were unable to recover it. 00:26:54.854 [2024-07-25 07:32:27.273857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.273900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.274063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.274088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.274240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.274270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 
00:26:54.855 [2024-07-25 07:32:27.274441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.274483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.274668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.274714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.274920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.274963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.275119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.275144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.275289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.275316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 
00:26:54.855 [2024-07-25 07:32:27.275482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.275523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.275701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.275743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.275911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.275954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.276135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.276160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.276320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.276345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 
00:26:54.855 [2024-07-25 07:32:27.276493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.276518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.276681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.276709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.276900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.276928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.277072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.277101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.277312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.277338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 
00:26:54.855 [2024-07-25 07:32:27.277512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.277554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.277721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.277763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.277897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.277924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.278079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.278104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 00:26:54.855 [2024-07-25 07:32:27.278301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.855 [2024-07-25 07:32:27.278343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.855 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.300950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.300976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.301126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.301150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.301292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.301321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.301539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.301585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.301770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.301813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.301971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.301996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.302174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.302200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.302369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.302412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.302603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.302629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.302805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.302848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.303002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.303028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.303211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.303236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.303415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.303458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.303638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.303681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.303860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.303903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.304027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.304053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.304182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.304208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.304386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.304429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.304597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.304640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.304851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.304894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.305076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.305101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.305264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.305291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.305468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.305512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.305657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.305704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.305871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.305913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.306057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.306083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.306211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.306237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.306373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.306400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.306604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.306647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.306821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.306864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.306988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.307015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.307195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.307220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.307407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.307450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.307626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.307669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.307811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.307854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.308007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.308031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.308195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.308221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.308371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.308416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.308624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.308651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.308871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.308915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 
00:26:54.857 [2024-07-25 07:32:27.309040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.309064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.309219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.309248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.309432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.309476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.857 qpair failed and we were unable to recover it. 00:26:54.857 [2024-07-25 07:32:27.309653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.857 [2024-07-25 07:32:27.309685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.309846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.309888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.310039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.310064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.310246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.310271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.310448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.310491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.310664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.310707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.310884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.310926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.311082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.311108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.311263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.311290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.311458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.311501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.311679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.311723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.311893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.311935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.312065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.312089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.312273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.312318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.312466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.312509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.312688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.312730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.312935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.312978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.313130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.313155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.313327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.313371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.313574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.313617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.313783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.313826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.313984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.314008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.314197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.314221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.314398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.314441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.314616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.314660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.314842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.314885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.315066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.315091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.315253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.315281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.315467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.315509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.315713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.315755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.315933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.315976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 00:26:54.858 [2024-07-25 07:32:27.316158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.316183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
00:26:54.858 [2024-07-25 07:32:27.316356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.858 [2024-07-25 07:32:27.316399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:54.858 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated for subsequent reconnect attempts through 07:32:27.323249 ...]
00:26:54.859 [2024-07-25 07:32:27.323447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.859 [2024-07-25 07:32:27.323489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:54.859 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated for subsequent reconnect attempts through 07:32:27.339947 ...]
00:26:55.140 [2024-07-25 07:32:27.340082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.340106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.340262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.340305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.340516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.340544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.340713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.340740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.340882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.340908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-07-25 07:32:27.341058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.341083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.341212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.341237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.341380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.341405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.341531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.341555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.341683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.341709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-07-25 07:32:27.341887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.341912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.342063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.342087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.342231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.342266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.342412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.342437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.342598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.342622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-07-25 07:32:27.342777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.342802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.342955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.342980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.343161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.343185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.343362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.343388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.343540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.343565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-07-25 07:32:27.343738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.343762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-07-25 07:32:27.343888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-07-25 07:32:27.343912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.344070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.344095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.344287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.344329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.344451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.344475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.344625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.344649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.344797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.344821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.344974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.344999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.345155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.345180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.345308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.345334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.345496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.345522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.345654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.345679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.345808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.345832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.346012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.346036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.346216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.346252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.346451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.346476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.346597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.346622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.346775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.346799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.346945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.346970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.347118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.347147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.347351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.347380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.347563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.347589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.347741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.347767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.347946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.347971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.348089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.348114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.348272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.348298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.348479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.348503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.348662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.348687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.348849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.348874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.349008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.349034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.349214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.349253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.349404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.349430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.349588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.349614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.349764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.349788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.349949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.349973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-07-25 07:32:27.350155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.350183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.350363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.350388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.350520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-07-25 07:32:27.350544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-07-25 07:32:27.350669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.350696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.350872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.350896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-07-25 07:32:27.351050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.351075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.351261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.351287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.351417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.351442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.351584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.351608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.351759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.351784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-07-25 07:32:27.351962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.351987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.352167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.352191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.352364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.352390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.352568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.352593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.352755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.352780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-07-25 07:32:27.352963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.352988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.353135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.353162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.353298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.353324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.353482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.353506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-07-25 07:32:27.353683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.353708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-07-25 07:32:27.353886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-07-25 07:32:27.353911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
[... preceding error triplet repeated verbatim from 07:32:27.353886 through 07:32:27.374754: every connect() to 10.0.0.2:4420 for tqpair=0x7f3d3c000b90 returned errno = 111 (ECONNREFUSED) and each qpair retry failed unrecoverably ...]
00:26:55.145 [2024-07-25 07:32:27.374880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-07-25 07:32:27.374905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-07-25 07:32:27.375056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-07-25 07:32:27.375081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-07-25 07:32:27.375228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-07-25 07:32:27.375258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-07-25 07:32:27.375389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-07-25 07:32:27.375414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-07-25 07:32:27.375593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-07-25 07:32:27.375618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.375777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.375801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.375953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.375978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.376148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.376176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.376345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.376371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.376506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.376533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.376690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.376720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.376872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.376896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.377023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.377049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.377169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.377194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.377370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.377396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.377549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.377574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.377726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.377750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.377926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.377951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.378108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.378133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.378256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.378281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.378463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.378488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.378643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.378668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.378843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.378868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.379024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.379049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.379237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.379288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.379411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.379435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.379631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.379659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.379824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.379852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.380020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.380163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.380347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.380496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.380645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.380803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.380953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.380978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.381101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.381127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.381311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.381337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.381490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.381515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.381695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.381721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-07-25 07:32:27.381842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-07-25 07:32:27.381868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-07-25 07:32:27.382021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.382047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.382212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.382239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.382417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.382442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.382591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.382615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.382772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.382796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.147 [2024-07-25 07:32:27.382946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.382971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.383149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.383174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.383296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.383322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.383505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.383530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.383657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.383682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.147 [2024-07-25 07:32:27.383841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.383870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.384004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.384029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.384223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.384257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.384454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.384479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.384664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.384689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.147 [2024-07-25 07:32:27.384842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.384866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.385020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.385045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.385224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.385261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.385408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.385433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.385604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.385628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.147 [2024-07-25 07:32:27.385762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.385788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.385948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.385972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.386147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.386174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.386352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.386378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.386539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.386564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.147 [2024-07-25 07:32:27.386715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.386739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.386920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.386945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.387101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.387127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.387251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.387276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.387456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.387481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.147 [2024-07-25 07:32:27.387609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.387634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.387813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.387838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.387959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.387983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.388157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.388185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 00:26:55.147 [2024-07-25 07:32:27.388373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.147 [2024-07-25 07:32:27.388399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.147 qpair failed and we were unable to recover it. 
00:26:55.148 [2024-07-25 07:32:27.388578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.148 [2024-07-25 07:32:27.388602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.148 qpair failed and we were unable to recover it. 
[... the same three-message failure repeats over a hundred times between 07:32:27.388 and 07:32:27.409, differing only in timestamps: posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it." Repetitions elided. ...]
00:26:55.151 [2024-07-25 07:32:27.409491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.409515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.409634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.409659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.409839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.409864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.410010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.410035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.410160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.410185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-07-25 07:32:27.410320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.410347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.410469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.410493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.410648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.410673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.410827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.410852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.411005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.411030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-07-25 07:32:27.411209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.411237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.411387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.411412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.411558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.411582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.411758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.411783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.411942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.411967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-07-25 07:32:27.412149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.412176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.412324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.412351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.412536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.412561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.412715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.412740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.412890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.412915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-07-25 07:32:27.413047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.413076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.413228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.413268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-07-25 07:32:27.413413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-07-25 07:32:27.413438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.413561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.413587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.413711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.413736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.413895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.413920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.414094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.414122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.414295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.414321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.414500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.414525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.414649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.414674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.414822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.414849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.415031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.415056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.415214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.415240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.415399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.415424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.415582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.415606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.415765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.415789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.415937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.415962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.416162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.416190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.416338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.416364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.416511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.416537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.416697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.416723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.416880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.416905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.417030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.417054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.417236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.417293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.417451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.417476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.417603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.417628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.417794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.417819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.417986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.418011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.418164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.418189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.418344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.418369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.418553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.418578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.418763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.418788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.418920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.418945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.419124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.419150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.419321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.419346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-07-25 07:32:27.419492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.419517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.419672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.419697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.419849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.419873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.419996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.420020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-07-25 07:32:27.420172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-07-25 07:32:27.420197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-07-25 07:32:27.420325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.420356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.420493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.420518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.420673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.420697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.420876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.420900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.421054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.421079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-07-25 07:32:27.421256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.421299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.421455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.421480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.421635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.421659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.421816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.421841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.421994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.422019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-07-25 07:32:27.422177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.422202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.422360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.422384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.422544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.422571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.422718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.422743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.422901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.422925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-07-25 07:32:27.423074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.423115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.423293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.423319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.423475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.423500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.423646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.423671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-07-25 07:32:27.423800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-07-25 07:32:27.423825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.156 [2024-07-25 07:32:27.444346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.444372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.444527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.444553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.444704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.444729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.444850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.444875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.445006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.445031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 
00:26:55.156 [2024-07-25 07:32:27.445151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.445177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.445334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.445359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.445512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.445536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.445765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.445791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.156 qpair failed and we were unable to recover it. 00:26:55.156 [2024-07-25 07:32:27.445949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.156 [2024-07-25 07:32:27.445974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.446123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.446151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.446322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.446348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.446578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.446603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.446755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.446781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.446932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.446957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.447109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.447134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.447291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.447317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.447465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.447493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.447641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.447666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.447848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.447873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.448002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.448026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.448204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.448233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.448414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.448440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.448622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.448647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.448801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.448827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.448985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.449010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.449166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.449190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.449342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.449368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.449521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.449546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.449694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.449719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.449870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.449895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.450050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.450078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.450354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.450380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.450528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.450553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.450738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.450763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.450942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.450967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.451101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.451126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.451314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.451340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.451494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.451519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.451640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.451665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.451817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.451842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.451995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.452020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.452176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.452204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.452384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.452409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.452572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.452597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-07-25 07:32:27.452730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-07-25 07:32:27.452755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-07-25 07:32:27.452914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.452939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.453107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.453135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.453303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.453328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.453455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.453480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-07-25 07:32:27.453712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.453737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.453892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.453918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.454096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.454122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.454272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.454298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.454423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.454449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-07-25 07:32:27.454635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.454660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.454785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.454810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.454963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.454993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.455151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.455177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.455330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.455355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-07-25 07:32:27.455538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.455563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.455691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.455716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.455892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.455917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.456094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.456119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.456281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.456308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-07-25 07:32:27.456459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.456484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.456637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.456663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.456845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.456870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.456997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.457022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.457223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.457256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-07-25 07:32:27.457445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.457470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.457602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.457628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.457785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.457811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.457935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.457961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.458081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.458106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-07-25 07:32:27.458284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.458310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.458465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.458490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.458645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.458671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.458825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.458850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-07-25 07:32:27.459033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-07-25 07:32:27.459057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.162 [2024-07-25 07:32:27.478951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.478975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.479133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.479158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.479324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.479351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.479507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.479531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.479709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.479733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 
00:26:55.162 [2024-07-25 07:32:27.479911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.479936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.480103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.480131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.480276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.480301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.480482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.480507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.480638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.480664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 
00:26:55.162 [2024-07-25 07:32:27.480819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.480844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.480996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.481021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.481176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.481205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.481410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.481436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.481568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.481592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 
00:26:55.162 [2024-07-25 07:32:27.481722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.481746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.481898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.481924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.482076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.482104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.482322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.482348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.482522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.482547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 
00:26:55.162 [2024-07-25 07:32:27.482670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.482694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.482905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.482930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.483077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.483102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.483256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.483281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.483437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.483461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 
00:26:55.162 [2024-07-25 07:32:27.483610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.483635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.483785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.483810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.483960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.483988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.484190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.162 [2024-07-25 07:32:27.484218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.162 qpair failed and we were unable to recover it. 00:26:55.162 [2024-07-25 07:32:27.484373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.484398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.484559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.484584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.484700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.484724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.484874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.484900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.485081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.485106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.485285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.485309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.485500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.485525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.485680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.485705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.485867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.485892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.486044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.486068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.486216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.486258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.486454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.486480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.486613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.486639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.486824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.486848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.486996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.487038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.487232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.487268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.487471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.487496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.487648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.487674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.487826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.487850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.487982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.488008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.488165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.488189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.488340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.488365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.488517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.488542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.488695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.488720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.488873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.488897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.489121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.489173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.489385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.489429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.489643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.489686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.490010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.490062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.490249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.490275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.490449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.490492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.490670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.490718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.490899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.490942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.491096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.491122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.491293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.491323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.491546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.491589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 
00:26:55.163 [2024-07-25 07:32:27.491769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.491811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.492021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.492063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.492194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.163 [2024-07-25 07:32:27.492224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.163 qpair failed and we were unable to recover it. 00:26:55.163 [2024-07-25 07:32:27.492382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.492425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 00:26:55.164 [2024-07-25 07:32:27.492631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.492674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 
00:26:55.164 [2024-07-25 07:32:27.492845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.492887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 00:26:55.164 [2024-07-25 07:32:27.493024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.493049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 00:26:55.164 [2024-07-25 07:32:27.493207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.493233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 00:26:55.164 [2024-07-25 07:32:27.493443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.493472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 00:26:55.164 [2024-07-25 07:32:27.493689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.164 [2024-07-25 07:32:27.493732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.164 qpair failed and we were unable to recover it. 
00:26:55.164 [2024-07-25 07:32:27.493965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.164 [2024-07-25 07:32:27.494014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.164 qpair failed and we were unable to recover it.
[... the same three-line error sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 07:32:27.494163 through 07:32:27.517810, differing only in timestamps ...]
00:26:55.167 [2024-07-25 07:32:27.518034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.167 [2024-07-25 07:32:27.518059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.167 qpair failed and we were unable to recover it.
00:26:55.167 [2024-07-25 07:32:27.518181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.518205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.518375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.518419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.518601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.518644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.518847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.518890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.519038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.519063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 
00:26:55.167 [2024-07-25 07:32:27.519220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.519250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.519428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.519470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.519645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.519688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.519849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.519892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.520041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.520065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 
00:26:55.167 [2024-07-25 07:32:27.520217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.520247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.520426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.520469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.520610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.520652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.520858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.520900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.521067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.521092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 
00:26:55.167 [2024-07-25 07:32:27.521266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.521308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.521483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.521528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.521705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.521748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.521919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.521961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.522113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.522138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 
00:26:55.167 [2024-07-25 07:32:27.522316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.522358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.522493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.522535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.522735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.522763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.522944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.522970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.523128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.523153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 
00:26:55.167 [2024-07-25 07:32:27.523331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.523356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.523537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.523562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.523736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.523761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.523941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.523984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.167 [2024-07-25 07:32:27.524108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.524132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 
00:26:55.167 [2024-07-25 07:32:27.524331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.167 [2024-07-25 07:32:27.524359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.167 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.524570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.524597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.524794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.524836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.524965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.524990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.525145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.525170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.525349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.525391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.525567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.525609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.525779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.525826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.525983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.526009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.526162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.526187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.526365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.526407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.526552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.526594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.526767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.526809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.526963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.526987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.527166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.527191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.527370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.527414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.527590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.527637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.527843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.527887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.528052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.528077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.528253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.528278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.528450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.528491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.528701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.528743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.528885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.528927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.529108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.529133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.529302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.529330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.529498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.529540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.529725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.529769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.529972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.530014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.530161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.530186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.530396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.530439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.530618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.530661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.530835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.530878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.531034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.531058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.531237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.531267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.531426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.531454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 
00:26:55.168 [2024-07-25 07:32:27.531648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.531693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.531833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.531875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.168 qpair failed and we were unable to recover it. 00:26:55.168 [2024-07-25 07:32:27.532035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.168 [2024-07-25 07:32:27.532060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.532193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.532219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.532371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.532415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 
00:26:55.169 [2024-07-25 07:32:27.532566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.532607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.532801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.532843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.532991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.533016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.533166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.533190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.533368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.533410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 
00:26:55.169 [2024-07-25 07:32:27.533587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.533631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.533813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.533855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.533988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.534013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.534198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.534224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 00:26:55.169 [2024-07-25 07:32:27.534402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.169 [2024-07-25 07:32:27.534444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.169 qpair failed and we were unable to recover it. 
00:26:55.171 [2024-07-25 07:32:27.557291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.557316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.557471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.557496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.557613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.557638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.557788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.557814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.557967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.557992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 
00:26:55.171 [2024-07-25 07:32:27.558141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.558166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.558341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.558383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.558525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.558567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.171 [2024-07-25 07:32:27.558716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.171 [2024-07-25 07:32:27.558759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.171 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.558915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.558940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.559095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.559120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.559289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.559317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.559502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.559543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.559708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.559751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.559869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.559894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.560045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.560070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.560258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.560284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.560430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.560473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.560646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.560688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.560857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.560899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.561052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.561076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.561206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.561231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.561387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.561415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.561578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.561619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.561794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.561836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.562023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.562049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.562227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.562257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.562434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.562476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.562680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.562722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.562893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.562934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.563077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.563102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.563228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.563260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.563411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.563454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.563610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.563652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.563858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.563905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.564082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.564108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.564262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.564287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.564464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.564508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.564713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.564756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.564930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.564972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.565156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.565181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.565353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.565398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.565566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.565609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.565755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.565799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.566001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.566043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 
00:26:55.172 [2024-07-25 07:32:27.566196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.566220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.566377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.566421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.172 [2024-07-25 07:32:27.566625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.172 [2024-07-25 07:32:27.566668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.172 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.566842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.566889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.567016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.567043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 
00:26:55.173 [2024-07-25 07:32:27.567223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.567254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.567401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.567443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.567592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.567635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.567782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.567825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.567975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.568017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 
00:26:55.173 [2024-07-25 07:32:27.568199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.568224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.568372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.568414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.568559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.568600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.568773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.568815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.568988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.569031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 
00:26:55.173 [2024-07-25 07:32:27.569210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.569235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.569427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.569469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.569672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.569714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.569891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.569932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.570062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.570087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 
00:26:55.173 [2024-07-25 07:32:27.570238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.570279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.570450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.570491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.570673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.570717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.570893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.570935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.571089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.571114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 
00:26:55.173 [2024-07-25 07:32:27.571267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.571293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.571469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.571512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.571683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.571725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.571916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.571960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 00:26:55.173 [2024-07-25 07:32:27.572112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.173 [2024-07-25 07:32:27.572141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.173 qpair failed and we were unable to recover it. 
00:26:55.173 [2024-07-25 07:32:27.572292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.173 [2024-07-25 07:32:27.572320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.173 qpair failed and we were unable to recover it.
[the three lines above repeat for each subsequent connection retry of tqpair=0x7f3d34000b90, timestamps 07:32:27.572515 through 07:32:27.595707]
00:26:55.176 [2024-07-25 07:32:27.595889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.595933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.596055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.596080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.596234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.596263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.596399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.596425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.596595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.596642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 
00:26:55.176 [2024-07-25 07:32:27.596814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.596857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.597011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.597036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.597162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.597187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.597365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.597408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.597614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.597658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 
00:26:55.176 [2024-07-25 07:32:27.597828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.597870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.598025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.598052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.598232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.598352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.598535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.598578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.598757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.598803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 
00:26:55.176 [2024-07-25 07:32:27.598931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.598958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.599109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.599134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.599343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.599387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.599549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.599592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.599797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.599840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 
00:26:55.176 [2024-07-25 07:32:27.600019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.600044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.600208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.600233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.600411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.600453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.600625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.600669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.600821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.600864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 
00:26:55.176 [2024-07-25 07:32:27.601038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.601063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.601217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.601248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.601427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.601471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.601674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.601717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.601859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.601902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 
00:26:55.176 [2024-07-25 07:32:27.602067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.602092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.602269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.602308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.602442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.602468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.602689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-07-25 07:32:27.602729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.176 qpair failed and we were unable to recover it. 00:26:55.176 [2024-07-25 07:32:27.602961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.602988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.603126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.603155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.603360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.603387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.603532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.603562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.603720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.603748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.603897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.603926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.604133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.604189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.604378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.604406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.604590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.604633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.604805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.604847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.605051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.605094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.605317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.605344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.605554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.605596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.605764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.605806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.605981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.606023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.606201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.606227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.606376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.606401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.606550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.606593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.606796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.606839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.607009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.607053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.607201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.607227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.607400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.607442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.607594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.607637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.607793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.607837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.607994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.608038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.608194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.608220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.608408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.608450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.608629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.608673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.608874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.608917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.609081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.609108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.609235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.609265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.609444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.609472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.609712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.609754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.609955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.609984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.610178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.610204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.610367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.610410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.610591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.610636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.610806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.610855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.611022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.611048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.611224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.611256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 00:26:55.177 [2024-07-25 07:32:27.611427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.177 [2024-07-25 07:32:27.611469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.177 qpair failed and we were unable to recover it. 
00:26:55.177 [2024-07-25 07:32:27.611659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.177 [2024-07-25 07:32:27.611685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.177 qpair failed and we were unable to recover it.
[... the same three-line triplet -- posix.c:1023:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." -- repeats continuously from 07:32:27.611897 through 07:32:27.634440, first for tqpair=0x7f3d34000b90 and then for tqpair=0x7f3d3c000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:26:55.180 [2024-07-25 07:32:27.634595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.180 [2024-07-25 07:32:27.634620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.180 qpair failed and we were unable to recover it.
00:26:55.180 [2024-07-25 07:32:27.634798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.634823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.634976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.635004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.635185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.635213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.635362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.635388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.635560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.635584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.635711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.635736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.635903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.635930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.636150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.636178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.636365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.636389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.636511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.636535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.636655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.636679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.636840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.636864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.636988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.637013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.637162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.637186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.637327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.637352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.637510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.637535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.637703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.637728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.637859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.637884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.638061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.638086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.638267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.638309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.638438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.638462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.638612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.638637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.638786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.638810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.638934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.638960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.639143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.639168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.639302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.639327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.639488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.639512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.639634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.639659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.639784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.639810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.639961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.639986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.640184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.640212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.640384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.640409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.640586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.640610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.640785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.640812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 00:26:55.180 [2024-07-25 07:32:27.640992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.641017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.180 qpair failed and we were unable to recover it. 
00:26:55.180 [2024-07-25 07:32:27.641218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.180 [2024-07-25 07:32:27.641254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.641459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.641500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.641709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.641737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.642019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.642058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.642261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.642304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 
00:26:55.181 [2024-07-25 07:32:27.642428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.642454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.642612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.642640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.642815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.642841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.642967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.642991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.643145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.643169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 
00:26:55.181 [2024-07-25 07:32:27.643335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.643362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.643511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.643536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.643717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.643741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.643894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.643918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.644097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.644125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 
00:26:55.181 [2024-07-25 07:32:27.644325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.644351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.644473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.644499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.644674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.644698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.644842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.644868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.645018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.645044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 
00:26:55.181 [2024-07-25 07:32:27.645179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.645205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.645345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.645371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.645549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.645574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.645749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.645774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.645932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.645958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 
00:26:55.181 [2024-07-25 07:32:27.646132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.646160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.646359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.646384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.646540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.646565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.181 [2024-07-25 07:32:27.646718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.181 [2024-07-25 07:32:27.646742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.181 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.646899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.646923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-07-25 07:32:27.647080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.647105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.647261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.647287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.647444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.647469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.647662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.647686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.647845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.647870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-07-25 07:32:27.648027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.648052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.648233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.648269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.648447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.648471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.648600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.648625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.648761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.648785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-07-25 07:32:27.648907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.648933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.649062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.649087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.649248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.649273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.649401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.649425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-07-25 07:32:27.649557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-07-25 07:32:27.649583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465-00:26:55.468 [the same three-line sequence -- posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats approximately 108 more times with advancing timestamps, 2024-07-25 07:32:27.649739 through 07:32:27.669513]
00:26:55.468 [2024-07-25 07:32:27.669681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.468 [2024-07-25 07:32:27.669715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.468 qpair failed and we were unable to recover it.
00:26:55.468 [2024-07-25 07:32:27.669894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.469 [2024-07-25 07:32:27.669939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.469 qpair failed and we were unable to recover it.
00:26:55.469 [2024-07-25 07:32:27.670095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.670121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.670288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.670317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.670508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.670552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.670728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.670773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.670956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.670998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.671150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.671176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.671339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.671382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.671551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.671593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.671793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.671821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.671991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.672016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.672140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.672167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.672336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.672385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.672561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.672604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.672742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.672784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.672940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.672965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.673115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.673141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.673327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.673371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.673523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.673565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.673741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.673782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.673934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.673959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.674108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.674134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.674312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.674338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.674456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.674481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.674658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.674683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.674832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.674857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.675023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.675048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.675227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.675259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.675392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.675419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.675575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.675601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.675748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.675791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.675951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.675976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.676134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.676161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.676318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.676362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.676564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.676607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.676839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.676885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-07-25 07:32:27.677063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-07-25 07:32:27.677089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-07-25 07:32:27.677284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.677309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.677495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.677537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.677733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.677775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.677980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.678009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-07-25 07:32:27.678184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.678213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.678371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.678398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.678542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.678570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.678706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.678734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.678949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.678976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-07-25 07:32:27.679108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.679136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.679300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.679327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.679451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.679476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.679603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.679643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.679834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.679862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-07-25 07:32:27.680020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.680047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.680183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.680212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.680406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.680432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.680579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.680604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.680774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.680802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-07-25 07:32:27.680976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.681003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.681175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.681199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.681357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.681383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.681534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.681561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.681892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.681939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-07-25 07:32:27.682115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.682143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.682331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.682357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-07-25 07:32:27.682510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-07-25 07:32:27.682534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.682670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.682709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.682905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.682933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 
00:26:55.471 [2024-07-25 07:32:27.683115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.683147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.683352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.683378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.683553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.683580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.683740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.683769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.683911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.683938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 
00:26:55.471 [2024-07-25 07:32:27.684113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.684141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.684311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.684337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.684490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.684531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.684788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.684816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.685053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.685080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 
00:26:55.471 [2024-07-25 07:32:27.685254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.685298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.685429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.685455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.685715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.685763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.685926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.685953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.686129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.686158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 
00:26:55.471 [2024-07-25 07:32:27.686330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.686356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.686505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.686546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.686850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.686905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.687082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.687110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 00:26:55.471 [2024-07-25 07:32:27.687240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.471 [2024-07-25 07:32:27.687290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.471 qpair failed and we were unable to recover it. 
00:26:55.471 [2024-07-25 07:32:27.687411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.687436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.687589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.687614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.687777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.687805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.687976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.688003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.688169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.688193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.688346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.688372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.688544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.688571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.688762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.688794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-07-25 07:32:27.688969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-07-25 07:32:27.688997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.689169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.689196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.689346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.689372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.689503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.689529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.689702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.689729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.689938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.689967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2577052 Killed "${NVMF_APP[@]}" "$@"
00:26:55.472 [2024-07-25 07:32:27.690178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.690217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.690424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.690455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:55.472 [2024-07-25 07:32:27.690626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.690653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:55.472 [2024-07-25 07:32:27.690856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.690885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:55.472 [2024-07-25 07:32:27.691052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.691080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.472 [2024-07-25 07:32:27.691312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.691338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.691510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.691539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.691888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.691942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.692111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.692137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.692313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.692342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.692503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.692532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.692780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.692832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.693009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.693034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.693186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.693211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.693367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.693395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.693564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.693593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.693869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.693920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.694100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.694125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.694335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.694363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.694530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.694560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.694823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.694877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-07-25 07:32:27.695048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.695073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2577613
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:55.472 [2024-07-25 07:32:27.695254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.695297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2577613
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2577613 ']'
00:26:55.472 [2024-07-25 07:32:27.695503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.695540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:55.472 [2024-07-25 07:32:27.695780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-07-25 07:32:27.695810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:55.472 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:55.472 [2024-07-25 07:32:27.696017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 07:32:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.473 [2024-07-25 07:32:27.696044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.696196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.696251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.696462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.696507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.696694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.696737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.696914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.696958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.697137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.697164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.697359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.697405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.697607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.697650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.697799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.697843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.697978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.698005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.698160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.698185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.698363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.698406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.698583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.698628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.698809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.698852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.699014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.699045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.699167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.699193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.699374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.699417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.699598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.699642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.699823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.699870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.700020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.700045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.700177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.700203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.700427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.700469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.700672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.700703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.700895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.700925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.701124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.701151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.701326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.701358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.701572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.701615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.701862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.701915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.702122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.702147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.702329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.702359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.702530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.702560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.702719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.702747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.702914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.702940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.703102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.703128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.703304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.703333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.473 qpair failed and we were unable to recover it.
00:26:55.473 [2024-07-25 07:32:27.703470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.473 [2024-07-25 07:32:27.703498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.703710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.703758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.703888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.703914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.704071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.704098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.704219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.704251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.704433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.704462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.704703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.704742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.704929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.704957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.705137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.705162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.705362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.705391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.705598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.705658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.705829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.705859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.706006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.706033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.706189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.474 [2024-07-25 07:32:27.706214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.474 qpair failed and we were unable to recover it.
00:26:55.474 [2024-07-25 07:32:27.706422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.706452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.706712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.706740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.706937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.706962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.707091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.707117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.707248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.707291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-07-25 07:32:27.707489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.707523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.707727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.707755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.707925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.707950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.708125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.708151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.708355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.708384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-07-25 07:32:27.708641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.708692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.708849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.708875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.709004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.709029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.709184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.709209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.709419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.709448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-07-25 07:32:27.709618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.709646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.709868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.709896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.710089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.710115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.710272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.710298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.710456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.710484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-07-25 07:32:27.710691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.710719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.710928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.710953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.711127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-07-25 07:32:27.711153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-07-25 07:32:27.711352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.711381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.711558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.711587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-07-25 07:32:27.711867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.711918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.712090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.712116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.712293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.712322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.712492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.712533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.712701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.712729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-07-25 07:32:27.712901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.712927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.713108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.713134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-07-25 07:32:27.713273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-07-25 07:32:27.713332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.713551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.713578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.713744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.713771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-07-25 07:32:27.713898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.713924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.714087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.714113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.714320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.714350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.714555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.714581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.714713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.714739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-07-25 07:32:27.714874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.714899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.715080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.715106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.715230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.715264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.715442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.715470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.715663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.715694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-07-25 07:32:27.715870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.715896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.716058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.716084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.716209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.716234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.716417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.716446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.716734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.716783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-07-25 07:32:27.717031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.717059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.717232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.717264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.717425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.717451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.717582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.717607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.717771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.717796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-07-25 07:32:27.717949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.717974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.718177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.718206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.718382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.718408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.718539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.718565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.718727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.718753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-07-25 07:32:27.718939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.718964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.719139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.719167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.719370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-07-25 07:32:27.719395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-07-25 07:32:27.719547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.719573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.719706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.719733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-07-25 07:32:27.719861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.719889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.720065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.720091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.720262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.720290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.720445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.720472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.720606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.720633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-07-25 07:32:27.720793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.720819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.721002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.721027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.721198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.721232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.721386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.721412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.721596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.721621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-07-25 07:32:27.721773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.721798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.721923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.721949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.722099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.722124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.722307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.722336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.722496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-07-25 07:32:27.722647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.722672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.722820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.722846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.722973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.722999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.723142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.723172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.723352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.723378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-07-25 07:32:27.723537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.723562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.723723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.723748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.723893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.723919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.724071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.724096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-07-25 07:32:27.724275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-07-25 07:32:27.724301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.480 [2024-07-25 07:32:27.739564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-07-25 07:32:27.739590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-07-25 07:32:27.739738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-07-25 07:32:27.739763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-07-25 07:32:27.739885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-07-25 07:32:27.739911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-07-25 07:32:27.739919] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:26:55.480 [2024-07-25 07:32:27.739999] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:55.480 [2024-07-25 07:32:27.740070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-07-25 07:32:27.740095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.481 [2024-07-25 07:32:27.745338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.745368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.745592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.745620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.745763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.745806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.745975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.746001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.746158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.746183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 
00:26:55.481 [2024-07-25 07:32:27.746358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.746392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.746628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.746656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.746830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.746855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.747008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.747033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.747215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.747245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 
00:26:55.481 [2024-07-25 07:32:27.747400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.747428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.747564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.747593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.747856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.747908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.748105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.748130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.748303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.748331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 
00:26:55.481 [2024-07-25 07:32:27.748557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.748585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.748772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.748799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.748969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.748994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.749146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.749171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.749378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.749407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 
00:26:55.481 [2024-07-25 07:32:27.749700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.749730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.749942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.749972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.750136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.750164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.750361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.750387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.750540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.750566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 
00:26:55.481 [2024-07-25 07:32:27.750746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.750771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.750896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.750923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.751079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.751105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.481 [2024-07-25 07:32:27.751261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.481 [2024-07-25 07:32:27.751287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.481 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.751465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.751491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.751646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.751671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.751829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.751854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.752010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.752035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.752158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.752183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.752362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.752388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.752541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.752567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.752714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.752739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.752894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.752919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.753064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.753090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.753252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.753278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.753400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.753425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.753581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.753608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.753765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.753790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.753940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.753966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.754137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.754165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.754339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.754369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.754516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.754541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.754673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.754698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.754846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.754871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.755050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.755076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.755226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.755261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.755443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.755468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.755626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.755652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.755769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.755793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.755942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.755967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.756173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.756201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.756369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.756394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.756574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.756600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.756752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.756780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.756976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.757004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-07-25 07:32:27.757213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.757247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.757450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.757479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.757639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-07-25 07:32:27.757667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-07-25 07:32:27.757832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.757860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.758087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.758114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-07-25 07:32:27.758317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.758343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.758502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.758527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.758709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.758734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.758861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.758886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.759034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.759059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-07-25 07:32:27.759183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.759208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.759370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.759396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.759570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.759605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.759749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.759777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-07-25 07:32:27.759948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.759991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-07-25 07:32:27.760181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-07-25 07:32:27.760207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplets repeated continuously from 07:32:27.760388 through 07:32:27.782406 for tqpair=0x7f3d34000b90 and tqpair=0x7f3d3c000b90, addr=10.0.0.2, port=4420; duplicate entries elided]
00:26:55.485 EAL: No free 2048 kB hugepages reported on node 1 
00:26:55.486 [2024-07-25 07:32:27.782564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.782590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.782743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.782769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.782920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.782945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.783106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.783132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.783290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.783316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-07-25 07:32:27.783472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.783497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.783653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.783679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.783830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.783855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.784036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.784062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.784212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.784239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-07-25 07:32:27.784381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.784408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-07-25 07:32:27.784556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-07-25 07:32:27.784581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.784754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.784780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.784958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.784983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.785107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.785132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.785267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.785298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.785445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.785471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.785660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.785685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.785817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.785842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.786017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.786043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.786198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.786224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.786355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.786381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.786537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.786564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.786711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.786736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.786892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.786917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.787064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.787089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.787239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.787270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.787425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.787450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.787588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.787615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.787804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.787829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.787980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.788005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.788126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.788151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.788339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.788365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.788496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.788521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.788671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.788696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.788859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.788884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.789008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.789033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.789181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.789206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.789383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.789409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.789560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.789585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.789764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.789790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.789943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.789968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.790156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.790181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.790343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.790369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.790499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.790524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-07-25 07:32:27.790670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.790695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.790846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-07-25 07:32:27.790872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-07-25 07:32:27.791027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.791053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.791240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.791271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.791431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.791456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.791583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.791609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.791766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.791793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.791960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.791987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.792167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.792192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.792354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.792380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.792510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.792540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.792700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.792725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.792909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.792935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.793093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.793120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.793290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.793317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.793460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.793500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.793682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.793707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.793837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.793863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.794008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.794033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.794256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.794282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.794419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.794446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.794604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.794631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.794821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.794846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.794998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.795023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.795214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.795240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.795398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.795423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.795600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.795625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.795750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.795775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.795921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.795947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.796098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.796123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.796280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.796306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.796457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.796483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.796670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.796695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.796849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.796874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.797048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.797073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 
00:26:55.488 [2024-07-25 07:32:27.797239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.797269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.797425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.797451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.797616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.797655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.488 [2024-07-25 07:32:27.797821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.488 [2024-07-25 07:32:27.797848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.488 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.798005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.798031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.798199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.798224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.798358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.798383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.798527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.798552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.798713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.798738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.798887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.798912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.799068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.799093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.799251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.799277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.799429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.799454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.799607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.799633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.799815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.799840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.800018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.800043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.800179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.800204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.800331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.800357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.800508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.800533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.800661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.800686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.800837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.800862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.801034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.801059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.801212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.801237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.801368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.801393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.801513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.801538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.801694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.801721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.801881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.801906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.802083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.802108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.802294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.802320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.802451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.802481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.802636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.802661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.802782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.802807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.802984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.803009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.803130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.803155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.803286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.803312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.803459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.803484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.803631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.803656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.803814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.803839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.803998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.804023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.489 [2024-07-25 07:32:27.804200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.804225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 
00:26:55.489 [2024-07-25 07:32:27.804408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.489 [2024-07-25 07:32:27.804434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.489 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.804612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.804637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.804785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.804810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.804997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.805022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.805177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.805202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-07-25 07:32:27.805369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.805395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.805524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.805549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.805701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.805726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.805850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.805875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.806024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.806049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-07-25 07:32:27.806169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.806194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.806344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.806370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.806492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.806516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.806630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.806655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.806809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.806834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-07-25 07:32:27.806986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.807011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.807154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.807183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.807341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.807367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.807528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.807552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.807700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.807725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-07-25 07:32:27.807850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.807874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.808025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.808049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.808185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.808225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.808395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.808423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.808584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.808610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-07-25 07:32:27.808762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.808788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.808942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.808968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.809121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.809146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.809300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.809326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.809480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.809506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-07-25 07:32:27.809672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-07-25 07:32:27.809699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-07-25 07:32:27.809855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.809880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.810029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.810055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.810205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.810231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.810393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.810418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 
00:26:55.491 [2024-07-25 07:32:27.810567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.491 [2024-07-25 07:32:27.810574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.810601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.810754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.810781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.810938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.810966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.811094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.811119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.811274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.811301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 
00:26:55.491 [2024-07-25 07:32:27.811481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.811506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.811722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.811747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.811870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.811896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.812123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.812148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.812305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.812330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 
00:26:55.491 [2024-07-25 07:32:27.812462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.812487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.812636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.812660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.812809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.812834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.812970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.812995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.813143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.813169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 
00:26:55.491 [2024-07-25 07:32:27.813324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.813351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.813506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.813533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.813715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.813740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.813872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.813898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.814047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.814073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 
00:26:55.491 [2024-07-25 07:32:27.814260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.814286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.814411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.814442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.814621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.814646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.814774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.814799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 00:26:55.491 [2024-07-25 07:32:27.814925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.491 [2024-07-25 07:32:27.814951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.491 qpair failed and we were unable to recover it. 
00:26:55.491 [2024-07-25 07:32:27.815111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.815135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.815289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.815315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.815435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.815460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.815613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.815638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.815788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.815813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.815937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.815961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.816116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.816141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.816320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.491 [2024-07-25 07:32:27.816347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.491 qpair failed and we were unable to recover it.
00:26:55.491 [2024-07-25 07:32:27.816514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.816539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.816697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.816722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.816851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.816876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.817028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.817054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.817213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.817238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.817393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.817418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.817573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.817598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.817756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.817782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.817961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.817985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.818143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.818168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.818377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.818403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.818583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.818609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.818741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.818766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.818921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.818948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.819112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.819137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.819261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.819286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.819485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.819509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.819688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.819713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.819832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.819857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.819986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.820011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.820138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.820163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.820325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.820350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.820466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.820491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.820619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.820644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.820794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.820819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.820977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.821002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.821132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.821157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.821287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.821313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.821433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.821458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.821641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.821680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.821852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.821881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.492 [2024-07-25 07:32:27.822067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.492 [2024-07-25 07:32:27.822094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.492 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.822280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.822308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.822463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.822490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.822644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.822670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.822836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.822862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.823017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.823044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.823167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.823193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.823372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.823411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.823571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.823597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.823778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.823803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.823953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.823978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.824169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.824194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.824368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.824394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.824522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.824547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.824696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.824721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.824877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.824903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.825060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.825088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.825250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.825278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.825409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.825438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.825565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.825593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.825728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.825755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.825904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.825930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.826057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.826083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.826262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.826288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.826437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.826462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.826623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.826651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.826809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.826836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.827045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.827223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.827384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.827558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.827698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.827859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.827986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.828012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.828177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.828217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.493 [2024-07-25 07:32:27.828411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.493 [2024-07-25 07:32:27.828451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.493 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.828642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.828670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.828856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.828883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.829039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.829073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.829260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.829288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.829453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.829479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.829630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.829656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.829839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.829865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.829991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.830018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.830172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.830197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.830355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.830381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.830543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.830570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.830765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.830793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.830952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.830978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.831133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.831159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.831291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.831317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.831476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.831501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.831663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.831688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.831814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.831842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.832027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.832053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.832207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.494 [2024-07-25 07:32:27.832233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.494 qpair failed and we were unable to recover it.
00:26:55.494 [2024-07-25 07:32:27.832394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.832420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.832548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.832573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.832709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.832735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.832885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.832911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.833059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.833084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 
00:26:55.494 [2024-07-25 07:32:27.833262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.833299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.833457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.833483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.833629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.833656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.833782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.833810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.833995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.834026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 
00:26:55.494 [2024-07-25 07:32:27.834180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.834207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.834346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.834373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.834526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.834554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.834675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.834700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.494 [2024-07-25 07:32:27.834829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.834855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 
00:26:55.494 [2024-07-25 07:32:27.835009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.494 [2024-07-25 07:32:27.835034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.494 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.835214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.835240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.835406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.835432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.835554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.835579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.835734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.835759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 
00:26:55.495 [2024-07-25 07:32:27.835907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.835932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.836085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.836110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.836232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.836276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.836464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.836489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.836611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.836636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 
00:26:55.495 [2024-07-25 07:32:27.836754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.836779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.836907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.836932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.837112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.837137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.837288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.837314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.837464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.837489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 
00:26:55.495 [2024-07-25 07:32:27.837634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.837659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.837783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.837808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.837958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.837983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.838131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.838156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.838289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.838315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 
00:26:55.495 [2024-07-25 07:32:27.838467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.838491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.838668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.838702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.838852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.838878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.839035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.839060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.839240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.839270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 
00:26:55.495 [2024-07-25 07:32:27.839420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.839445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.839569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.839594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.839751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.839777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.839898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.839923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.840082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.840107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 
00:26:55.495 [2024-07-25 07:32:27.840288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.840314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.840467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.495 [2024-07-25 07:32:27.840503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.495 qpair failed and we were unable to recover it. 00:26:55.495 [2024-07-25 07:32:27.840659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.840683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.840843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.840869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.841051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.841076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.841204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.841229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.841371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.841410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.841597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.841625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.841782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.841808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.841962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.841988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.842113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.842141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.842304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.842331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.842463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.842489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.842645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.842672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.842835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.842860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.843001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.843027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.843180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.843207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.843372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.843398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.843559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.843591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.843762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.843787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.843968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.843994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.844160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.844185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.844339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.844365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.844500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.844527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.844650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.844675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.844863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.844888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.845042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.845067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.845192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.845219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.845382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.845408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.845541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.845567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.845688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.845712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.845860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.845885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.846013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.846038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.846183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.846209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.846365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.846391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 
00:26:55.496 [2024-07-25 07:32:27.846518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.846545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.846668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.846693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.846840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.846866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.847025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.496 [2024-07-25 07:32:27.847050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.496 qpair failed and we were unable to recover it. 00:26:55.496 [2024-07-25 07:32:27.847175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.497 [2024-07-25 07:32:27.847202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.497 qpair failed and we were unable to recover it. 
00:26:55.497 [2024-07-25 07:32:27.847389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.497 [2024-07-25 07:32:27.847417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.497 qpair failed and we were unable to recover it.
00:26:55.498 [2024-07-25 07:32:27.855134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.498 [2024-07-25 07:32:27.855174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.498 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeated ~113 more times for tqpair=0x1c29250 and tqpair=0x7f3d34000b90, addr=10.0.0.2, port=4420, timestamps 07:32:27.847548 through 07:32:27.867576 ...]
00:26:55.499 [2024-07-25 07:32:27.867706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.499 [2024-07-25 07:32:27.867732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.499 qpair failed and we were unable to recover it. 00:26:55.499 [2024-07-25 07:32:27.867911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.499 [2024-07-25 07:32:27.867937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.499 qpair failed and we were unable to recover it. 00:26:55.499 [2024-07-25 07:32:27.868062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.499 [2024-07-25 07:32:27.868089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.499 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.868229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.868270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.868460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.868486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.868642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.868667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.868822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.868847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.868969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.868994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.869114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.869139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.869320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.869347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.869479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.869505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.869682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.869707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.869860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.869885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.870018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.870043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.870202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.870227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.870393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.870419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.870542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.870568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.870724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.870749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.870883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.870909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.871060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.871085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.871205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.871230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.871420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.871445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.871568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.871593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.871770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.871795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.871946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.871971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.872115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.872155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.872328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.872357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.872499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.872525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.872680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.872705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.872857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.872882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.873012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.873040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.873196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.873224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.873357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.873383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.873514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.873539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.873660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.873686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 
00:26:55.500 [2024-07-25 07:32:27.873839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.873864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.874019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.874044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.874211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.874258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.874423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.874451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.500 qpair failed and we were unable to recover it. 00:26:55.500 [2024-07-25 07:32:27.874626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.500 [2024-07-25 07:32:27.874654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.874813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.874839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.874969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.874994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.875147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.875172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.875355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.875382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.875532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.875557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.875717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.875742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.875900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.875925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.876088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.876112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.876239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.876269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.876395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.876420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.876603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.876628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.876752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.876777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.876932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.876963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.877088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.877113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.877266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.877291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.877414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.877439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.877614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.877639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.877795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.877820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.877953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.877978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.878128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.878153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.878301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.878327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.878457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.878482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.878628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.878653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.878798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.878824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.879005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.879029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.879180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.879205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.879363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.879390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.879569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.879594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.879771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.879796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.879975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.880001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.880153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.880177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.880308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.880334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.880515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.880540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.880699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.880725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.880909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.880934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.881093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.881120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.881308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.881334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.881464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.881489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.881645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.881670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.881794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.881819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 
00:26:55.501 [2024-07-25 07:32:27.881969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.881995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.882163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.882187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.501 [2024-07-25 07:32:27.882338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.501 [2024-07-25 07:32:27.882364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.501 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.882522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.882548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.882708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.882733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.882885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.882912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.883059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.883084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.883212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.883238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.883423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.883448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.883605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.883631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.883759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.883784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.883914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.883939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.884062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.884087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.884262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.884299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.884474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.884501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.884656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.884683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.884843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.884870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.885028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.885054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.885206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.885231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.885371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.885398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.885586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.885611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.885759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.885784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.885909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.885936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.886087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.886111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.886276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.886303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.886433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.886458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.886607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.886631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.886810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.886835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.886964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.886993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.887158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.887184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.887340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.887367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.887523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.887550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.887707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.887733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.887886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.887913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.888065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.888091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.888265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.888292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.889551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.889583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.889748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.889776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.889967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.889994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.890173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.890200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.890349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.890376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.890561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.890588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.890720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.890746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.890902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.890928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.891090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.891118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 
00:26:55.502 [2024-07-25 07:32:27.891286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.891315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.502 qpair failed and we were unable to recover it. 00:26:55.502 [2024-07-25 07:32:27.891500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.502 [2024-07-25 07:32:27.891526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.892542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.892580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.892821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.892848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.893008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.893035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.893188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.893215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.893370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.893396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.893551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.893577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.893735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.893767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.893928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.893971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.894140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.894166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.894327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.894354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.894511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.894554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.894725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.894752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.895691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.895731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.895977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.896004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.896128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.896154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.896306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.896334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.896487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.896513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.896647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.896674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.896837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.896863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.896991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.897018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.897150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.897179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.897316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.897343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.897502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.897528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.897707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.897733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.897885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.897911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.898065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.898098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.898227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.898270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.898406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.898433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.898595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.898621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.898744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.898770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.898951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.898978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.899131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.899157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.899310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.899337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.899522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.899556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.899727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.899753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.899950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.899976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.900133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.900159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.900315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.900342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.900501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.900536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.900690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.900716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.900847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.900874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.901031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.901057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.901189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.901215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 
00:26:55.503 [2024-07-25 07:32:27.901351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.901378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.901506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.901549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.503 qpair failed and we were unable to recover it. 00:26:55.503 [2024-07-25 07:32:27.901710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.503 [2024-07-25 07:32:27.901736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.504 qpair failed and we were unable to recover it. 00:26:55.504 [2024-07-25 07:32:27.901921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.504 [2024-07-25 07:32:27.901953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.504 qpair failed and we were unable to recover it. 00:26:55.504 [2024-07-25 07:32:27.902091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.504 [2024-07-25 07:32:27.902117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.504 qpair failed and we were unable to recover it. 
00:26:55.505 [2024-07-25 07:32:27.916303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.505 [2024-07-25 07:32:27.916344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.505 qpair failed and we were unable to recover it.
00:26:55.506 [2024-07-25 07:32:27.919563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.506 [2024-07-25 07:32:27.919615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.506 qpair failed and we were unable to recover it.
00:26:55.506 [2024-07-25 07:32:27.924735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.924776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.924991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.925017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.925151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.925177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.925343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.925370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.925502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.925527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 
00:26:55.506 [2024-07-25 07:32:27.925658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.925683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.925813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.925838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.926015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.926040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.926187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.926212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.926353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.926378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 
00:26:55.506 [2024-07-25 07:32:27.926507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.926533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.926718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.926744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.927399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.927428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.927556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.927581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.927954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.506 [2024-07-25 07:32:27.927987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.506 [2024-07-25 07:32:27.928002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.506 [2024-07-25 07:32:27.928015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:55.506 [2024-07-25 07:32:27.928026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.506 [2024-07-25 07:32:27.928207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.928259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.506 qpair failed and we were unable to recover it. 00:26:55.506 [2024-07-25 07:32:27.928346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:26:55.506 [2024-07-25 07:32:27.928410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-25 07:32:27.928435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.928398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:26:55.507 [2024-07-25 07:32:27.928447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:26:55.507 [2024-07-25 07:32:27.928450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.507 [2024-07-25 07:32:27.929273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.929303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.929447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.929473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.929618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.929643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.929837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.929877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.930014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.930041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.930177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.930203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.930345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.930372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.930502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.930528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.930677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.930703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.930828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.930854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.931021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.931053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.931223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.931254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.931377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.931403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.931527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.931552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.931714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.931739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.931865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.931890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.932024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.932064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.932189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.932214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.932348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.932375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.932501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.932527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.932677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.932703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.932828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.932854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.933005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.933037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.933187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.933213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.933348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.933374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.933495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.933520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.933644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.933679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.933838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.933863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.934018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.934043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.934282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.934308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.934437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.934464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.934591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.934616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.934749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.934774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.934921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.934946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.935107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.935131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.935264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.935291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.935423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.935448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.935610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.935635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.935774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.935800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.935981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.936006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.936146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.936171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.936335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.936362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 
00:26:55.507 [2024-07-25 07:32:27.936490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.936515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.936735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.936778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.936919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-25 07:32:27.936947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.507 qpair failed and we were unable to recover it. 00:26:55.507 [2024-07-25 07:32:27.937076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.937105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.937273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.937301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.937435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.937461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.937657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.937683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.937837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.937863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.938036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.938061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.938189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.938214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.938358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.938385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.938524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.938550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.938679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.938704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.938834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.938860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.938997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.939029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.939164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.939189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.939328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.939354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.939480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.939506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.939634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.939660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.939795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.939821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.939975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.940002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.940152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.940179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.940324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.940351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.940485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.940511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.940716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.940742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.940951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.940977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.941116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.941143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.941335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.941377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.941537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.941582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.941710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.941738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.941888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.941914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.942059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.942084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.942252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.942279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.942411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.942437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.942564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.942591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.942749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.942775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.942918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.942944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.943077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.943103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.943230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.943270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.943396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.943421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.943537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.943562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.943721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.943751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.943871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.943897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.944041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.944080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.508 [2024-07-25 07:32:27.944222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.944258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 
00:26:55.508 [2024-07-25 07:32:27.944388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-25 07:32:27.944414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.508 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.944542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.944567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.944695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.944720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.944846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.944871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.945017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.945043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.945196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.945221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.945427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.945456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.945609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.945635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.945787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.945812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.945932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.945957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.946084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.946110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.946248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.946274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.946415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.946441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.946600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.946627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.946763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.946797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.946950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.946975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.947187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.947214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.947388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.947428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.947561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.947588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.947727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.947752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.947878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.947903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.948061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.948086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.948205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.948230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.948379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.948408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.948540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.948570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.948738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.948765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.948915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.948941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.949073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.949099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.949224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.949264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.949384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.949409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.949535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.949560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.949714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.949740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.949895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.949920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.950058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.950085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.950224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.950256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.950408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.950434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.950559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.950589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.950711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.950736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.950870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.950898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.951078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.951104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.951263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.951289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.951408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.951433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.951558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.951584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.951735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.951760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.951914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.951939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.509 [2024-07-25 07:32:27.952126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.952151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.952290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.952317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.952436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.952461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.952618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.952644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 00:26:55.509 [2024-07-25 07:32:27.952780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.509 [2024-07-25 07:32:27.952805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.509 qpair failed and we were unable to recover it. 
00:26:55.510 [2024-07-25 07:32:27.952931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.952958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.953142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.953168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.953319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.953346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.953524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.953555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.953787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.953813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 
00:26:55.510 [2024-07-25 07:32:27.953959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.953985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.954114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.954140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.954320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.954346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.954484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.954511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.954670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.954706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 
00:26:55.510 [2024-07-25 07:32:27.954859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.954884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.955010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.955035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.955217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.955264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.955398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.955428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 00:26:55.510 [2024-07-25 07:32:27.955560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.955585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 
00:26:55.510 [2024-07-25 07:32:27.955716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.510 [2024-07-25 07:32:27.955741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420 00:26:55.510 qpair failed and we were unable to recover it. 
[... the same three-message error sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats over 100 more times between 07:32:27.955895 and 07:32:27.974920, every attempt targeting addr=10.0.0.2, port=4420; most failures report tqpair=0x1c29250, with tqpair=0x7f3d34000b90 appearing from 07:32:27.966806 onward ...]
00:26:55.774 [2024-07-25 07:32:27.975081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.975106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.975264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.975291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.975412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.975437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.975566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.975591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.975725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.975749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 
00:26:55.774 [2024-07-25 07:32:27.975883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.975908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.976085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.976118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.976248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.976275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.774 qpair failed and we were unable to recover it. 00:26:55.774 [2024-07-25 07:32:27.976428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.774 [2024-07-25 07:32:27.976453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.976591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.976617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.976754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.976784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.976923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.976947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.977085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.977111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.977308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.977335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.977489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.977515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.977656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.977681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.977803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.977828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.977976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.978000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.978156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.978182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.978346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.978372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.978568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.978592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.978747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.978772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.978930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.978956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.979117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.979142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.979268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.979293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.979418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.979443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.979592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.979618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.979774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.979799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.979936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.979961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.980108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.980133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.980272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.980299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.980438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.980467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.980599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.980624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.980782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.980812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.980961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.980993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.981126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.981152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.981394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.981421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.981552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.981577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.981723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.981749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.981939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.981965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 
00:26:55.775 [2024-07-25 07:32:27.982103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.982135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.982261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.982286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.982409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.775 [2024-07-25 07:32:27.982434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.775 qpair failed and we were unable to recover it. 00:26:55.775 [2024-07-25 07:32:27.982617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.982642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.982762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.982786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.982943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.982968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.983104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.983131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.983264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.983289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.983425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.983449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.983613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.983640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.983774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.983800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.983924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.983950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.984132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.984158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.984287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.984313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.984456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.984481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.984622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.984647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.984807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.984831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.984959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.984985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.985111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.985142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.985305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.985330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.985453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.985478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.985615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.985639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.985756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.985781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.985946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.985971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.986097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.986121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.986247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.986273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.986399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.986425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.986582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.986618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.986766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.986791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.986923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.986947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.987099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.987124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.987275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.987306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.987436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.987461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.987576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.987600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.987756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.987783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 
00:26:55.776 [2024-07-25 07:32:27.987931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.987957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.988080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.988105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.776 [2024-07-25 07:32:27.988272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.776 [2024-07-25 07:32:27.988299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.776 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.988438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.988464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.988639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.988665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.988815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.988851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.989003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.989028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.989150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.989175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.989319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.989346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.989495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.989520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.989698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.989723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.989873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.989898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.990030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.990055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.990177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.990203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.990379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.990405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.990535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.990563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.990750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.990774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.990898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.990924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.991083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.991109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.991264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.991290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.991413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.991439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.991562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.991587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.991716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.991742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.991903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.991940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.992129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.992154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.992287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.992313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.992459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.992485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.992605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.992630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.992788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.992813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.992936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.992962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.993115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.993141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.993267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.993292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.993413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.993439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.993592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.993622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.993749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.993775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 
00:26:55.777 [2024-07-25 07:32:27.993926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.993951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.994088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.994118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.994278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.777 [2024-07-25 07:32:27.994305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.777 qpair failed and we were unable to recover it. 00:26:55.777 [2024-07-25 07:32:27.994431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.994456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.994618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.994643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 
00:26:55.778 [2024-07-25 07:32:27.994768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.994793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.994937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.994962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.995145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.995170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.995299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.995325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.995461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.995487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 
00:26:55.778 [2024-07-25 07:32:27.995624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.995648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.995806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.995831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.995955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.995982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.996134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.996159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.996315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.996342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 
00:26:55.778 [2024-07-25 07:32:27.996469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.996495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.996628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.996653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.996824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.996849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.997002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.997027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.997163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.997189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 
00:26:55.778 [2024-07-25 07:32:27.997353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.997379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.997495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.997520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.997651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.997677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.997880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.997905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.998056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.998081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 
00:26:55.778 [2024-07-25 07:32:27.998236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.998267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.998412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.998437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.998573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.998598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.998736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.998761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.998909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.998934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 
00:26:55.778 [2024-07-25 07:32:27.999087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.778 [2024-07-25 07:32:27.999112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.778 qpair failed and we were unable to recover it. 00:26:55.778 [2024-07-25 07:32:27.999279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:27.999305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:27.999443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:27.999473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:27.999619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:27.999644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:27.999807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:27.999832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:27.999969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:27.999994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.000159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.000185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.000348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.000374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.000503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.000529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.000692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.000717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:28.000880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.000905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.001034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.001064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.001230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.001263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.001420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.001445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.001570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.001595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:28.001768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.001793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.001921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.001946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.002105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.002129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.002265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.002291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.002428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.002454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:28.002601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.002627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.002751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.002778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.002954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.002979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.003100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.003126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.003255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.003281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:28.003489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.003515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.003637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.003662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.003840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.003866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.003987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.004140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:28.004302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.004460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.004623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.004801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.004962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.004987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 
00:26:55.779 [2024-07-25 07:32:28.005154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.005180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.005328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.005354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.779 [2024-07-25 07:32:28.005515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.779 [2024-07-25 07:32:28.005540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.779 qpair failed and we were unable to recover it. 00:26:55.780 [2024-07-25 07:32:28.005704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.780 [2024-07-25 07:32:28.005731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.780 qpair failed and we were unable to recover it. 00:26:55.780 [2024-07-25 07:32:28.005880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.780 [2024-07-25 07:32:28.005905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420 00:26:55.780 qpair failed and we were unable to recover it. 
00:26:55.780 [2024-07-25 07:32:28.006029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.006054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.006213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.006238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.006370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.006396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.006560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.006585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.006700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.006726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.006845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.006870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.007027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.007052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.007170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.007196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.007362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.007388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.007516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.007546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.007677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.007702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.007853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.007882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.008018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.008042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.008174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.008200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.008390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.008417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.008551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.008576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.008694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.008719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.008850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.008876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.009029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.009054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.009173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.009198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.009340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.009366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.009521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.009547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.009693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.009717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.009840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.009866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.010023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.010048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.010171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.010196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.010392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.010434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.010612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.010639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.010781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.010807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.010956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.010982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.011103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.011128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.011263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.780 [2024-07-25 07:32:28.011290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.780 qpair failed and we were unable to recover it.
00:26:55.780 [2024-07-25 07:32:28.011443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.011468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.011637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.011662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.011784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.011809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.011961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.011988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.012124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.012150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.012283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.012331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.012479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.012508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.012640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.012667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.012819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.012844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.012970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.012997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.013127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.013153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.013283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.013309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.013466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.013492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.013626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.013650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.013797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.013822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.013991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.014016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.014169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.014195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 A controller has encountered a failure and is being reset.
00:26:55.781 [2024-07-25 07:32:28.014370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.014411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.014598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.014626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.014780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.014819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.014944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.014971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.015098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.015123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.015280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.015307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.015433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.015459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.015582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.015608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.015741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.015766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.015890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.015916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.016096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.016122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.016252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.016278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.016401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.016426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.016546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.016572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.016702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.016727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.016882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.016908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.017068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.017094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.017234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.017267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.781 qpair failed and we were unable to recover it.
00:26:55.781 [2024-07-25 07:32:28.017425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.781 [2024-07-25 07:32:28.017452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.017608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.017635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.017761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.017787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.017939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.017965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.018087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.018114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.018276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.018303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.018422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.018448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.018570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.018596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.018744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.018770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.018926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.018952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.019099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.019125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d2c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.019276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.019316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d3c000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.019455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.019494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c29250 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.019643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.019672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.019823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.019849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.019970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.019995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.020171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.020197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.020340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.020366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.020482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.020507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.020678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.020704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.020825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.020850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.020994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.021019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.021199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.021224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.021362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.021389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.021546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.021575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.021761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.782 [2024-07-25 07:32:28.021787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3d34000b90 with addr=10.0.0.2, port=4420
00:26:55.782 qpair failed and we were unable to recover it.
00:26:55.782 [2024-07-25 07:32:28.021976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.782 [2024-07-25 07:32:28.022023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c37230 with addr=10.0.0.2, port=4420 00:26:55.782 [2024-07-25 07:32:28.022043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37230 is same with the state(5) to be set 00:26:55.782 [2024-07-25 07:32:28.022070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37230 (9): Bad file descriptor 00:26:55.782 [2024-07-25 07:32:28.022089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.782 [2024-07-25 07:32:28.022113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.782 [2024-07-25 07:32:28.022131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.782 Unable to reset the controller. 
00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.347 Malloc0 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.347 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.347 [2024-07-25 
07:32:28.732793] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.348 [2024-07-25 
07:32:28.761076] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.348 07:32:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2577204 00:26:56.605 Controller properly reset. 00:27:01.866 Initializing NVMe Controllers 00:27:01.866 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:01.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:01.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:01.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:01.866 Initialization complete. Launching workers. 
00:27:01.866 Starting thread on core 1 00:27:01.866 Starting thread on core 2 00:27:01.866 Starting thread on core 3 00:27:01.866 Starting thread on core 0 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:01.866 00:27:01.866 real 0m11.349s 00:27:01.866 user 0m35.688s 00:27:01.866 sys 0m8.016s 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.866 ************************************ 00:27:01.866 END TEST nvmf_target_disconnect_tc2 00:27:01.866 ************************************ 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.866 07:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.866 rmmod nvme_tcp 00:27:01.866 rmmod nvme_fabrics 00:27:01.866 rmmod nvme_keyring 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2577613 ']' 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2577613 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2577613 ']' 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2577613 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2577613 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2577613' 00:27:01.866 killing process with pid 2577613 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2577613 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2577613 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.866 07:32:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.395 07:32:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.395 00:27:04.395 real 0m16.192s 00:27:04.395 user 1m0.961s 00:27:04.395 sys 0m10.515s 00:27:04.395 07:32:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.395 07:32:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:04.395 ************************************ 00:27:04.395 END TEST nvmf_target_disconnect 00:27:04.395 ************************************ 00:27:04.395 07:32:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:04.395 00:27:04.395 real 5m2.580s 00:27:04.395 user 10m59.249s 00:27:04.395 sys 1m13.443s 00:27:04.395 07:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.395 07:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.395 ************************************ 00:27:04.395 END TEST nvmf_host 00:27:04.395 ************************************ 00:27:04.395 00:27:04.395 real 19m31.073s 00:27:04.395 user 46m25.243s 00:27:04.395 sys 4m55.090s 00:27:04.395 07:32:36 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.395 07:32:36 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:27:04.395 ************************************ 00:27:04.395 END TEST nvmf_tcp 00:27:04.395 ************************************ 00:27:04.395 07:32:36 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:27:04.395 07:32:36 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:04.395 07:32:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:04.395 07:32:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:04.395 07:32:36 -- common/autotest_common.sh@10 -- # set +x 00:27:04.395 ************************************ 00:27:04.395 START TEST spdkcli_nvmf_tcp 00:27:04.395 ************************************ 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:04.395 * Looking for test storage... 00:27:04.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2578812 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2578812 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2578812 ']' 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.395 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.396 [2024-07-25 07:32:36.593252] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:27:04.396 [2024-07-25 07:32:36.593349] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578812 ] 00:27:04.396 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.396 [2024-07-25 07:32:36.651039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:04.396 [2024-07-25 07:32:36.759821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.396 [2024-07-25 07:32:36.759827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:27:04.396 07:32:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:04.396 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:04.396 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:04.396 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:04.396 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:04.396 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:04.396 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:04.396 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.396 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.396 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:04.396 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:04.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:04.396 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:04.396 ' 00:27:06.923 [2024-07-25 07:32:39.425876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.306 [2024-07-25 07:32:40.650286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:10.832 [2024-07-25 07:32:42.949416] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:12.725 [2024-07-25 07:32:44.891565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:27:14.097 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:14.097 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:14.097 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:14.097 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:14.097 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:14.097 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:14.097 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:14.097 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.097 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.097 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:14.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:14.097 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.097 07:32:46 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:14.097 07:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.663 07:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:14.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:14.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:14.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:14.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:14.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:14.663 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:14.663 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:14.663 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:14.663 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:14.663 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:14.663 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:14.663 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:14.663 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:14.663 ' 00:27:19.920 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:19.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:19.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:19.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:19.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:19.921 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:19.921 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:19.921 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:19.921 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:27:19.921 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:19.921 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:19.921 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:19.921 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:19.921 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2578812 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2578812 ']' 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2578812 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578812 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578812' 00:27:19.921 killing process with pid 2578812 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2578812 00:27:19.921 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2578812 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:20.179 
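The cleanup pass above deletes objects in dependency order: namespaces and hosts first, then listeners, then the subsystems themselves, and only then the malloc bdevs they referenced. A minimal sketch of that ordering, driving `spdkcli.py` directly instead of the harness's `spdkcli_job.py` (the `SPDKCLI` path is an assumption — point it at your checkout; by default the script only previews the commands):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the teardown order the spdkcli test exercises.
# Set APPLY=1 with a running nvmf target to actually execute.
set -euo pipefail

SPDKCLI="${SPDKCLI:-scripts/spdkcli.py}"   # assumed path; adjust to your checkout

run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$SPDKCLI" "$@"
    else
        echo "+ spdkcli $*"    # preview only
    fi
}

# 1. Detach namespaces and hosts from the subsystem
run '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'
run '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete_all'
# 2. Remove listeners
run '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'
# 3. Remove the subsystems themselves
run '/nvmf/subsystem delete_all'
# 4. Finally drop the backing bdevs (no longer referenced)
run '/bdevs/malloc delete Malloc1'
```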
07:32:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2578812 ']' 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2578812 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2578812 ']' 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2578812 00:27:20.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2578812) - No such process 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2578812 is not found' 00:27:20.179 Process with pid 2578812 is not found 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:20.179 00:27:20.179 real 0m16.067s 00:27:20.179 user 0m33.927s 00:27:20.179 sys 0m0.790s 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.179 07:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.179 ************************************ 00:27:20.179 END TEST spdkcli_nvmf_tcp 00:27:20.179 ************************************ 00:27:20.179 07:32:52 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:20.179 07:32:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.179 07:32:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.179 07:32:52 -- common/autotest_common.sh@10 -- # set +x 00:27:20.179 ************************************ 00:27:20.179 START TEST 
nvmf_identify_passthru 00:27:20.179 ************************************ 00:27:20.179 07:32:52 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:20.179 * Looking for test storage... 00:27:20.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.179 07:32:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.179 07:32:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.179 07:32:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.179 07:32:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.179 07:32:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.179 07:32:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.179 07:32:52 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.179 07:32:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:20.179 07:32:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.179 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.179 07:32:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.179 07:32:52 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.179 07:32:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.179 07:32:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.179 07:32:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.180 07:32:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.180 07:32:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.180 07:32:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:27:20.180 07:32:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.180 07:32:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.180 07:32:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:20.180 07:32:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.180 07:32:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.180 07:32:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.078 07:32:54 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.078 07:32:54 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.078 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.336 07:32:54 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:27:22.336 00:27:22.336 --- 10.0.0.2 ping statistics --- 00:27:22.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.336 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:27:22.336 00:27:22.336 --- 10.0.0.1 ping statistics --- 00:27:22.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.336 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.336 07:32:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:22.336 07:32:54 nvmf_identify_passthru -- 
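The `nvmf_tcp_init` steps logged above build the test topology: one port of the NIC pair is moved into a fresh network namespace to act as the target, the other stays in the default namespace as the initiator, the NVMe-oF port is allowed through iptables, and connectivity is verified with a ping in each direction. A condensed sketch of that sequence (interface names, addresses, and the namespace name are taken from the log; by default this only previews the commands — set `APPLY=1` as root with real NICs to execute):

```shell
#!/usr/bin/env bash
# Sketch of the network-namespace topology nvmf_tcp_init sets up.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # moved into the namespace (target side)
INI_IF=cvl_0_1          # stays in the default namespace (initiator side)

run() {
    if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Admit the NVMe-oF TCP port before the connectivity check.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving a physical function into a namespace (rather than using a veth pair) is what lets a single host exercise real NIC-to-NIC TCP traffic between target and initiator.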
common/autotest_common.sh@1513 -- # bdfs=() 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:27:22.336 07:32:54 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:22.336 07:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:22.594 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.778 07:32:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:26.778 07:32:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:26.778 07:32:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:27:26.778 07:32:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:26.778 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2583437 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:30.994 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2583437 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2583437 ']' 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:30.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:30.994 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:30.994 [2024-07-25 07:33:03.323710] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:27:30.994 [2024-07-25 07:33:03.323809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.994 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.994 [2024-07-25 07:33:03.387948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.994 [2024-07-25 07:33:03.499005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.994 [2024-07-25 07:33:03.499062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.994 [2024-07-25 07:33:03.499091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.994 [2024-07-25 07:33:03.499103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.994 [2024-07-25 07:33:03.499114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:30.994 [2024-07-25 07:33:03.499189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.994 [2024-07-25 07:33:03.499251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.994 [2024-07-25 07:33:03.499318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.994 [2024-07-25 07:33:03.499320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.251 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.251 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:27:31.251 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:31.251 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.251 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:31.251 INFO: Log level set to 20 00:27:31.251 INFO: Requests: 00:27:31.251 { 00:27:31.251 "jsonrpc": "2.0", 00:27:31.251 "method": "nvmf_set_config", 00:27:31.251 "id": 1, 00:27:31.251 "params": { 00:27:31.251 "admin_cmd_passthru": { 00:27:31.252 "identify_ctrlr": true 00:27:31.252 } 00:27:31.252 } 00:27:31.252 } 00:27:31.252 00:27:31.252 INFO: response: 00:27:31.252 { 00:27:31.252 "jsonrpc": "2.0", 00:27:31.252 "id": 1, 00:27:31.252 "result": true 00:27:31.252 } 00:27:31.252 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.252 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:31.252 INFO: Setting log level to 20 00:27:31.252 INFO: Setting log level to 20 00:27:31.252 INFO: Log level set to 20 00:27:31.252 INFO: Log level set to 20 00:27:31.252 
INFO: Requests: 00:27:31.252 { 00:27:31.252 "jsonrpc": "2.0", 00:27:31.252 "method": "framework_start_init", 00:27:31.252 "id": 1 00:27:31.252 } 00:27:31.252 00:27:31.252 INFO: Requests: 00:27:31.252 { 00:27:31.252 "jsonrpc": "2.0", 00:27:31.252 "method": "framework_start_init", 00:27:31.252 "id": 1 00:27:31.252 } 00:27:31.252 00:27:31.252 [2024-07-25 07:33:03.674688] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:31.252 INFO: response: 00:27:31.252 { 00:27:31.252 "jsonrpc": "2.0", 00:27:31.252 "id": 1, 00:27:31.252 "result": true 00:27:31.252 } 00:27:31.252 00:27:31.252 INFO: response: 00:27:31.252 { 00:27:31.252 "jsonrpc": "2.0", 00:27:31.252 "id": 1, 00:27:31.252 "result": true 00:27:31.252 } 00:27:31.252 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.252 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:31.252 INFO: Setting log level to 40 00:27:31.252 INFO: Setting log level to 40 00:27:31.252 INFO: Setting log level to 40 00:27:31.252 [2024-07-25 07:33:03.684932] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.252 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:31.252 07:33:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:31.252 07:33:03 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.252 07:33:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 Nvme0n1 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 [2024-07-25 07:33:06.576901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.528 07:33:06 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 [ 00:27:34.528 { 00:27:34.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:34.528 "subtype": "Discovery", 00:27:34.528 "listen_addresses": [], 00:27:34.528 "allow_any_host": true, 00:27:34.528 "hosts": [] 00:27:34.528 }, 00:27:34.528 { 00:27:34.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.528 "subtype": "NVMe", 00:27:34.528 "listen_addresses": [ 00:27:34.528 { 00:27:34.528 "trtype": "TCP", 00:27:34.528 "adrfam": "IPv4", 00:27:34.528 "traddr": "10.0.0.2", 00:27:34.528 "trsvcid": "4420" 00:27:34.528 } 00:27:34.528 ], 00:27:34.528 "allow_any_host": true, 00:27:34.528 "hosts": [], 00:27:34.528 "serial_number": "SPDK00000000000001", 00:27:34.528 "model_number": "SPDK bdev Controller", 00:27:34.528 "max_namespaces": 1, 00:27:34.528 "min_cntlid": 1, 00:27:34.528 "max_cntlid": 65519, 00:27:34.528 "namespaces": [ 00:27:34.528 { 00:27:34.528 "nsid": 1, 00:27:34.528 "bdev_name": "Nvme0n1", 00:27:34.528 "name": "Nvme0n1", 00:27:34.528 "nguid": "93B214C413634BEB849529D73AD8A77F", 00:27:34.528 "uuid": "93b214c4-1363-4beb-8495-29d73ad8a77f" 00:27:34.528 } 00:27:34.528 ] 00:27:34.528 } 00:27:34.528 ] 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:34.528 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:27:34.528 07:33:06 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:34.528 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 07:33:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:34.528 07:33:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:34.528 07:33:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:34.528 07:33:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:34.528 07:33:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:34.528 07:33:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:34.528 07:33:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:34.528 07:33:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:34.528 rmmod 
nvme_tcp 00:27:34.528 rmmod nvme_fabrics 00:27:34.528 rmmod nvme_keyring 00:27:34.528 07:33:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:34.528 07:33:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:34.528 07:33:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:34.528 07:33:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2583437 ']' 00:27:34.528 07:33:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2583437 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2583437 ']' 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2583437 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2583437 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2583437' 00:27:34.528 killing process with pid 2583437 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2583437 00:27:34.528 07:33:07 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2583437 00:27:36.426 07:33:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.426 07:33:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.426 07:33:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.426 07:33:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:27:36.426 07:33:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.426 07:33:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.426 07:33:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:36.426 07:33:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.326 07:33:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.326 00:27:38.326 real 0m18.088s 00:27:38.326 user 0m26.816s 00:27:38.326 sys 0m2.338s 00:27:38.326 07:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:38.326 07:33:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:38.326 ************************************ 00:27:38.326 END TEST nvmf_identify_passthru 00:27:38.326 ************************************ 00:27:38.326 07:33:10 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:38.326 07:33:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:38.326 07:33:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.326 07:33:10 -- common/autotest_common.sh@10 -- # set +x 00:27:38.326 ************************************ 00:27:38.326 START TEST nvmf_dif 00:27:38.326 ************************************ 00:27:38.326 07:33:10 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:38.326 * Looking for test storage... 
00:27:38.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.326 07:33:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.326 07:33:10 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.326 07:33:10 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.326 07:33:10 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.326 07:33:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 07:33:10 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 07:33:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 07:33:10 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:38.326 07:33:10 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.326 07:33:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:38.326 07:33:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:38.326 07:33:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:38.326 07:33:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:38.326 07:33:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.326 07:33:10 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:38.326 07:33:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.326 07:33:10 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.326 07:33:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.229 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:27:40.229 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.229 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.229 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.229 07:33:12 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:27:40.229 00:27:40.229 --- 10.0.0.2 ping statistics --- 00:27:40.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.229 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:40.229 07:33:12 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:27:40.488 00:27:40.488 --- 10.0.0.1 ping statistics --- 00:27:40.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.488 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:27:40.488 07:33:12 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.488 07:33:12 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:40.488 07:33:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:40.488 07:33:12 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:41.420 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:41.420 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:41.420 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:41.420 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:41.420 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:41.420 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:41.420 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:41.420 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:41.420 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:41.420 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:41.420 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:41.420 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:41.420 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:41.420 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:41.420 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:41.420 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:41.420 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.678 07:33:14 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.678 07:33:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:41.678 07:33:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2587201 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:41.678 07:33:14 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2587201 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2587201 ']' 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.678 07:33:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.678 [2024-07-25 07:33:14.099975] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:27:41.678 [2024-07-25 07:33:14.100050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.678 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.678 [2024-07-25 07:33:14.162230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.936 [2024-07-25 07:33:14.271649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.936 [2024-07-25 07:33:14.271703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.936 [2024-07-25 07:33:14.271732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.937 [2024-07-25 07:33:14.271745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.937 [2024-07-25 07:33:14.271756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.937 [2024-07-25 07:33:14.271796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:27:41.937 07:33:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.937 07:33:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.937 07:33:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:41.937 07:33:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.937 [2024-07-25 07:33:14.421763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.937 07:33:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.937 07:33:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.937 ************************************ 00:27:41.937 START TEST fio_dif_1_default 00:27:41.937 ************************************ 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.937 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.195 bdev_null0 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.195 [2024-07-25 07:33:14.486090] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.195 { 00:27:42.195 "params": { 00:27:42.195 "name": "Nvme$subsystem", 00:27:42.195 "trtype": "$TEST_TRANSPORT", 00:27:42.195 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:42.195 "adrfam": "ipv4", 00:27:42.195 "trsvcid": "$NVMF_PORT", 00:27:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.195 "hdgst": ${hdgst:-false}, 00:27:42.195 "ddgst": ${ddgst:-false} 00:27:42.195 }, 00:27:42.195 "method": "bdev_nvme_attach_controller" 00:27:42.195 } 00:27:42.195 EOF 00:27:42.195 )") 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.195 "params": { 00:27:42.195 "name": "Nvme0", 00:27:42.195 "trtype": "tcp", 00:27:42.195 "traddr": "10.0.0.2", 00:27:42.195 "adrfam": "ipv4", 00:27:42.195 "trsvcid": "4420", 00:27:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.195 "hdgst": false, 00:27:42.195 "ddgst": false 00:27:42.195 }, 00:27:42.195 "method": "bdev_nvme_attach_controller" 00:27:42.195 }' 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:42.195 07:33:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:42.454 fio-3.35 
00:27:42.454 Starting 1 thread 00:27:42.454 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.656 00:27:54.656 filename0: (groupid=0, jobs=1): err= 0: pid=2587426: Thu Jul 25 07:33:25 2024 00:27:54.656 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10010msec) 00:27:54.656 slat (nsec): min=6837, max=93522, avg=9497.32, stdev=3689.35 00:27:54.656 clat (usec): min=726, max=45326, avg=21043.02, stdev=20141.94 00:27:54.656 lat (usec): min=734, max=45362, avg=21052.51, stdev=20141.61 00:27:54.656 clat percentiles (usec): 00:27:54.656 | 1.00th=[ 783], 5.00th=[ 791], 10.00th=[ 799], 20.00th=[ 816], 00:27:54.656 | 30.00th=[ 881], 40.00th=[ 938], 50.00th=[41157], 60.00th=[41157], 00:27:54.656 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:54.656 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:27:54.656 | 99.99th=[45351] 00:27:54.656 bw ( KiB/s): min= 704, max= 768, per=99.84%, avg=758.40, stdev=18.28, samples=20 00:27:54.656 iops : min= 176, max= 192, avg=189.60, stdev= 4.57, samples=20 00:27:54.656 lat (usec) : 750=0.21%, 1000=48.00% 00:27:54.656 lat (msec) : 2=1.68%, 50=50.11% 00:27:54.656 cpu : usr=89.09%, sys=10.59%, ctx=26, majf=0, minf=266 00:27:54.656 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.656 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.656 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:54.656 00:27:54.656 Run status group 0 (all jobs): 00:27:54.656 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7600KiB (7782kB), run=10010-10010msec 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:54.656 07:33:25 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 00:27:54.656 real 0m11.274s 00:27:54.656 user 0m10.117s 00:27:54.656 sys 0m1.342s 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 ************************************ 00:27:54.656 END TEST fio_dif_1_default 00:27:54.656 ************************************ 00:27:54.656 07:33:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:54.656 07:33:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:54.656 07:33:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 ************************************ 00:27:54.656 START TEST fio_dif_1_multi_subsystems 00:27:54.656 
************************************ 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 bdev_null0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 [2024-07-25 07:33:25.811879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 bdev_null1 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.656 
07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.657 07:33:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.657 { 00:27:54.657 "params": { 00:27:54.657 "name": "Nvme$subsystem", 00:27:54.657 "trtype": "$TEST_TRANSPORT", 00:27:54.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.657 "adrfam": "ipv4", 00:27:54.657 "trsvcid": "$NVMF_PORT", 00:27:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.657 "hdgst": ${hdgst:-false}, 00:27:54.657 "ddgst": ${ddgst:-false} 00:27:54.657 }, 00:27:54.657 "method": "bdev_nvme_attach_controller" 00:27:54.657 } 00:27:54.657 EOF 00:27:54.657 )") 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@554 -- # cat 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.657 { 00:27:54.657 "params": { 00:27:54.657 "name": "Nvme$subsystem", 00:27:54.657 "trtype": "$TEST_TRANSPORT", 00:27:54.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.657 "adrfam": "ipv4", 00:27:54.657 "trsvcid": "$NVMF_PORT", 00:27:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.657 "hdgst": ${hdgst:-false}, 00:27:54.657 "ddgst": ${ddgst:-false} 00:27:54.657 }, 00:27:54.657 "method": "bdev_nvme_attach_controller" 00:27:54.657 } 00:27:54.657 EOF 00:27:54.657 )") 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.657 "params": { 00:27:54.657 "name": "Nvme0", 00:27:54.657 "trtype": "tcp", 00:27:54.657 "traddr": "10.0.0.2", 00:27:54.657 "adrfam": "ipv4", 00:27:54.657 "trsvcid": "4420", 00:27:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.657 "hdgst": false, 00:27:54.657 "ddgst": false 00:27:54.657 }, 00:27:54.657 "method": "bdev_nvme_attach_controller" 00:27:54.657 },{ 00:27:54.657 "params": { 00:27:54.657 "name": "Nvme1", 00:27:54.657 "trtype": "tcp", 00:27:54.657 "traddr": "10.0.0.2", 00:27:54.657 "adrfam": "ipv4", 00:27:54.657 "trsvcid": "4420", 00:27:54.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.657 "hdgst": false, 00:27:54.657 "ddgst": false 00:27:54.657 }, 00:27:54.657 "method": "bdev_nvme_attach_controller" 00:27:54.657 }' 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:54.657 07:33:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.657 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.657 fio-3.35 00:27:54.657 Starting 2 threads 00:27:54.657 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.624 00:28:04.624 filename0: (groupid=0, jobs=1): err= 0: pid=2588830: Thu Jul 25 07:33:36 2024 00:28:04.624 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:28:04.624 slat (nsec): min=6991, max=67769, avg=9467.66, stdev=4118.53 00:28:04.624 clat (usec): min=716, max=45680, avg=21026.89, stdev=20173.51 00:28:04.624 lat (usec): min=723, max=45725, avg=21036.36, stdev=20172.94 00:28:04.624 clat percentiles (usec): 00:28:04.624 | 1.00th=[ 742], 5.00th=[ 766], 10.00th=[ 791], 20.00th=[ 807], 00:28:04.624 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[41157], 60.00th=[41157], 00:28:04.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:04.624 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:28:04.624 | 99.99th=[45876] 00:28:04.624 bw ( KiB/s): min= 704, max= 768, per=57.18%, avg=761.26, stdev=20.18, samples=19 00:28:04.624 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:28:04.624 lat (usec) : 750=2.05%, 1000=47.63% 00:28:04.624 lat (msec) : 2=0.21%, 50=50.11% 00:28:04.624 cpu : usr=94.62%, sys=5.08%, ctx=16, majf=0, minf=166 00:28:04.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.624 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.624 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.624 filename1: (groupid=0, jobs=1): err= 0: pid=2588831: Thu Jul 25 07:33:36 2024 00:28:04.624 read: IOPS=142, BW=571KiB/s (585kB/s)(5712KiB/10003msec) 00:28:04.624 slat (nsec): min=6974, max=33430, avg=9466.86, stdev=3716.37 00:28:04.624 clat (usec): min=812, max=45658, avg=27989.50, stdev=18931.24 00:28:04.624 lat (usec): min=819, max=45675, avg=27998.97, stdev=18931.21 00:28:04.624 clat percentiles (usec): 00:28:04.624 | 1.00th=[ 832], 5.00th=[ 865], 10.00th=[ 881], 20.00th=[ 898], 00:28:04.624 | 30.00th=[ 930], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:04.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:28:04.624 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:28:04.624 | 99.99th=[45876] 00:28:04.624 bw ( KiB/s): min= 384, max= 768, per=42.76%, avg=569.60, stdev=187.63, samples=20 00:28:04.624 iops : min= 96, max= 192, avg=142.40, stdev=46.91, samples=20 00:28:04.624 lat (usec) : 1000=32.77% 00:28:04.624 lat (msec) : 50=67.23% 00:28:04.624 cpu : usr=94.11%, sys=5.59%, ctx=15, majf=0, minf=90 00:28:04.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.624 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.624 00:28:04.624 Run status group 0 (all jobs): 00:28:04.624 READ: bw=1331KiB/s (1363kB/s), 571KiB/s-760KiB/s (585kB/s-778kB/s), io=13.0MiB (13.6MB), run=10002-10003msec 00:28:04.624 07:33:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.624 00:28:04.624 real 0m11.346s 00:28:04.624 user 0m20.245s 00:28:04.624 sys 0m1.368s 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.624 07:33:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.624 ************************************ 00:28:04.624 END TEST fio_dif_1_multi_subsystems 00:28:04.624 ************************************ 00:28:04.624 07:33:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:04.624 07:33:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:04.624 07:33:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.624 07:33:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:04.883 ************************************ 00:28:04.883 START TEST fio_dif_rand_params 00:28:04.883 ************************************ 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:04.883 07:33:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.883 bdev_null0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.883 
07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.883 [2024-07-25 07:33:37.207497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.883 07:33:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.883 { 00:28:04.883 "params": { 00:28:04.883 "name": "Nvme$subsystem", 00:28:04.883 "trtype": "$TEST_TRANSPORT", 00:28:04.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.883 "adrfam": "ipv4", 00:28:04.883 "trsvcid": "$NVMF_PORT", 00:28:04.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.883 "hdgst": ${hdgst:-false}, 00:28:04.883 "ddgst": ${ddgst:-false} 00:28:04.883 }, 00:28:04.883 "method": "bdev_nvme_attach_controller" 00:28:04.883 } 00:28:04.883 EOF 00:28:04.883 )") 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:04.883 
07:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:04.883 07:33:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:04.883 "params": { 00:28:04.883 "name": "Nvme0", 00:28:04.884 "trtype": "tcp", 00:28:04.884 "traddr": "10.0.0.2", 00:28:04.884 "adrfam": "ipv4", 00:28:04.884 "trsvcid": "4420", 00:28:04.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.884 "hdgst": false, 00:28:04.884 "ddgst": false 00:28:04.884 }, 00:28:04.884 "method": "bdev_nvme_attach_controller" 00:28:04.884 }' 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:04.884 07:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:04.884 07:33:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.142 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:05.142 ... 00:28:05.142 fio-3.35 00:28:05.142 Starting 3 threads 00:28:05.142 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.702 00:28:11.702 filename0: (groupid=0, jobs=1): err= 0: pid=2590230: Thu Jul 25 07:33:42 2024 00:28:11.702 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(105MiB/5028msec) 00:28:11.702 slat (nsec): min=7148, max=78202, avg=14369.58, stdev=5570.74 00:28:11.702 clat (usec): min=5916, max=96052, avg=17913.07, stdev=16500.45 00:28:11.702 lat (usec): min=5929, max=96066, avg=17927.44, stdev=16500.34 00:28:11.702 clat percentiles (usec): 00:28:11.702 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 8160], 20.00th=[ 9110], 00:28:11.702 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11600], 60.00th=[12780], 00:28:11.702 | 70.00th=[13829], 80.00th=[15139], 90.00th=[50594], 95.00th=[53216], 00:28:11.702 | 99.00th=[57934], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:28:11.702 | 99.99th=[95945] 00:28:11.702 bw ( KiB/s): min=15104, max=29184, per=29.73%, avg=21447.30, stdev=5091.55, samples=10 00:28:11.702 iops : min= 118, max= 228, avg=167.50, stdev=39.71, samples=10 00:28:11.702 lat (msec) : 10=36.27%, 20=47.56%, 50=5.35%, 100=10.82% 00:28:11.702 cpu : usr=92.92%, sys=6.62%, ctx=23, majf=0, minf=165 00:28:11.702 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.703 issued rwts: total=841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.703 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.703 filename0: (groupid=0, jobs=1): err= 0: pid=2590231: Thu Jul 
25 07:33:42 2024 00:28:11.703 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(129MiB/5003msec) 00:28:11.703 slat (nsec): min=7229, max=70587, avg=15345.80, stdev=5810.42 00:28:11.703 clat (usec): min=5242, max=90198, avg=14549.14, stdev=13226.03 00:28:11.703 lat (usec): min=5254, max=90210, avg=14564.48, stdev=13226.15 00:28:11.703 clat percentiles (usec): 00:28:11.703 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7832], 00:28:11.703 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[11600], 00:28:11.703 | 70.00th=[12780], 80.00th=[13960], 90.00th=[47449], 95.00th=[50594], 00:28:11.703 | 99.00th=[54264], 99.50th=[54789], 99.90th=[89654], 99.95th=[90702], 00:28:11.703 | 99.99th=[90702] 00:28:11.703 bw ( KiB/s): min=18688, max=31488, per=36.48%, avg=26316.80, stdev=4199.23, samples=10 00:28:11.703 iops : min= 146, max= 246, avg=205.60, stdev=32.81, samples=10 00:28:11.703 lat (msec) : 10=49.22%, 20=40.10%, 50=3.69%, 100=6.99% 00:28:11.703 cpu : usr=91.66%, sys=7.88%, ctx=17, majf=0, minf=155 00:28:11.703 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.703 issued rwts: total=1030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.703 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.703 filename0: (groupid=0, jobs=1): err= 0: pid=2590232: Thu Jul 25 07:33:42 2024 00:28:11.703 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(121MiB/5031msec) 00:28:11.703 slat (nsec): min=7155, max=72105, avg=14213.68, stdev=4771.87 00:28:11.703 clat (usec): min=5301, max=90710, avg=15638.14, stdev=13697.98 00:28:11.703 lat (usec): min=5314, max=90728, avg=15652.36, stdev=13697.84 00:28:11.703 clat percentiles (usec): 00:28:11.703 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 7635], 20.00th=[ 8717], 00:28:11.703 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10814], 
60.00th=[11994], 00:28:11.703 | 70.00th=[13042], 80.00th=[14746], 90.00th=[48497], 95.00th=[50594], 00:28:11.703 | 99.00th=[53740], 99.50th=[55313], 99.90th=[90702], 99.95th=[90702], 00:28:11.703 | 99.99th=[90702] 00:28:11.703 bw ( KiB/s): min=16896, max=29440, per=34.11%, avg=24601.60, stdev=4456.08, samples=10 00:28:11.703 iops : min= 132, max= 230, avg=192.20, stdev=34.81, samples=10 00:28:11.703 lat (msec) : 10=41.39%, 20=46.47%, 50=5.19%, 100=6.95% 00:28:11.703 cpu : usr=92.07%, sys=7.51%, ctx=12, majf=0, minf=175 00:28:11.703 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.703 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.703 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.703 00:28:11.703 Run status group 0 (all jobs): 00:28:11.703 READ: bw=70.4MiB/s (73.9MB/s), 20.9MiB/s-25.7MiB/s (21.9MB/s-27.0MB/s), io=354MiB (372MB), run=5003-5031msec 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 bdev_null0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 [2024-07-25 07:33:43.345115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:28:11.703 bdev_null1 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:11.703 
07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 bdev_null2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:11.703 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
2 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.704 { 00:28:11.704 "params": { 00:28:11.704 "name": "Nvme$subsystem", 00:28:11.704 "trtype": "$TEST_TRANSPORT", 00:28:11.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.704 "adrfam": "ipv4", 00:28:11.704 "trsvcid": "$NVMF_PORT", 00:28:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.704 "hdgst": ${hdgst:-false}, 00:28:11.704 "ddgst": ${ddgst:-false} 00:28:11.704 }, 00:28:11.704 "method": "bdev_nvme_attach_controller" 00:28:11.704 } 00:28:11.704 EOF 00:28:11.704 )") 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@54 -- # local file 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.704 { 00:28:11.704 "params": { 00:28:11.704 "name": "Nvme$subsystem", 00:28:11.704 "trtype": "$TEST_TRANSPORT", 00:28:11.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.704 "adrfam": "ipv4", 00:28:11.704 "trsvcid": "$NVMF_PORT", 00:28:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.704 "hdgst": ${hdgst:-false}, 00:28:11.704 "ddgst": ${ddgst:-false} 00:28:11.704 }, 00:28:11.704 "method": "bdev_nvme_attach_controller" 00:28:11.704 } 00:28:11.704 EOF 00:28:11.704 )") 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.704 { 00:28:11.704 "params": { 00:28:11.704 "name": "Nvme$subsystem", 00:28:11.704 "trtype": "$TEST_TRANSPORT", 00:28:11.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.704 "adrfam": "ipv4", 00:28:11.704 "trsvcid": "$NVMF_PORT", 00:28:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.704 "hdgst": ${hdgst:-false}, 00:28:11.704 "ddgst": ${ddgst:-false} 00:28:11.704 }, 00:28:11.704 "method": "bdev_nvme_attach_controller" 00:28:11.704 } 00:28:11.704 EOF 00:28:11.704 )") 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:11.704 "params": { 00:28:11.704 "name": "Nvme0", 00:28:11.704 "trtype": "tcp", 00:28:11.704 "traddr": "10.0.0.2", 00:28:11.704 "adrfam": "ipv4", 00:28:11.704 "trsvcid": "4420", 00:28:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:11.704 "hdgst": false, 00:28:11.704 "ddgst": false 00:28:11.704 }, 00:28:11.704 "method": "bdev_nvme_attach_controller" 00:28:11.704 },{ 00:28:11.704 "params": { 00:28:11.704 "name": "Nvme1", 00:28:11.704 "trtype": "tcp", 00:28:11.704 "traddr": "10.0.0.2", 00:28:11.704 "adrfam": "ipv4", 00:28:11.704 "trsvcid": "4420", 00:28:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.704 "hdgst": false, 00:28:11.704 "ddgst": false 00:28:11.704 }, 00:28:11.704 "method": "bdev_nvme_attach_controller" 00:28:11.704 },{ 00:28:11.704 "params": { 00:28:11.704 "name": "Nvme2", 00:28:11.704 "trtype": "tcp", 00:28:11.704 "traddr": "10.0.0.2", 00:28:11.704 "adrfam": "ipv4", 00:28:11.704 "trsvcid": "4420", 00:28:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:11.704 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:11.704 "hdgst": false, 00:28:11.704 "ddgst": false 00:28:11.704 }, 00:28:11.704 "method": "bdev_nvme_attach_controller" 00:28:11.704 }' 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.704 07:33:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:11.704 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:11.705 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:11.705 07:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.705 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.705 ... 00:28:11.705 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.705 ... 00:28:11.705 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.705 ... 
00:28:11.705 fio-3.35 00:28:11.705 Starting 24 threads 00:28:11.705 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.901 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591099: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=485, BW=1944KiB/s (1990kB/s)(19.0MiB/10022msec) 00:28:23.901 slat (nsec): min=6183, max=82035, avg=30032.42, stdev=12446.49 00:28:23.901 clat (usec): min=13045, max=38604, avg=32669.84, stdev=2123.84 00:28:23.901 lat (usec): min=13051, max=38640, avg=32699.88, stdev=2123.77 00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[21627], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.901 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.901 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:28:23.901 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[38536], 00:28:23.901 | 99.99th=[38536] 00:28:23.901 bw ( KiB/s): min= 1792, max= 2228, per=4.19%, avg=1941.80, stdev=84.14, samples=20 00:28:23.901 iops : min= 448, max= 557, avg=485.45, stdev=21.03, samples=20 00:28:23.901 lat (msec) : 20=0.62%, 50=99.38% 00:28:23.901 cpu : usr=93.66%, sys=3.51%, ctx=201, majf=0, minf=9 00:28:23.901 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:23.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591100: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=509, BW=2039KiB/s (2087kB/s)(19.9MiB/10011msec) 00:28:23.901 slat (nsec): min=8119, max=63063, avg=16712.52, stdev=9779.75 00:28:23.901 clat (usec): min=11805, max=50743, avg=31300.83, stdev=5159.39 00:28:23.901 lat (usec): min=11813, max=50776, avg=31317.54, stdev=5161.17 
00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[16712], 5.00th=[21890], 10.00th=[23200], 20.00th=[27395], 00:28:23.901 | 30.00th=[28443], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.901 | 70.00th=[32900], 80.00th=[33424], 90.00th=[38011], 95.00th=[38536], 00:28:23.901 | 99.00th=[43779], 99.50th=[46924], 99.90th=[50594], 99.95th=[50594], 00:28:23.901 | 99.99th=[50594] 00:28:23.901 bw ( KiB/s): min= 1664, max= 2288, per=4.40%, avg=2037.05, stdev=129.99, samples=19 00:28:23.901 iops : min= 416, max= 572, avg=509.26, stdev=32.50, samples=19 00:28:23.901 lat (msec) : 20=1.57%, 50=98.12%, 100=0.31% 00:28:23.901 cpu : usr=97.25%, sys=1.90%, ctx=56, majf=0, minf=11 00:28:23.901 IO depths : 1=0.5%, 2=1.2%, 4=5.1%, 8=78.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:28:23.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 complete : 0=0.0%, 4=89.5%, 8=8.0%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 issued rwts: total=5102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591101: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10013msec) 00:28:23.901 slat (nsec): min=5634, max=94014, avg=33562.52, stdev=13996.81 00:28:23.901 clat (usec): min=13796, max=37773, avg=32843.78, stdev=1412.56 00:28:23.901 lat (usec): min=13814, max=37829, avg=32877.34, stdev=1413.07 00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.901 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.901 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:23.901 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:28:23.901 | 99.99th=[38011] 00:28:23.901 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1926.55, stdev=28.59, samples=20 00:28:23.901 
iops : min= 480, max= 512, avg=481.60, stdev= 7.16, samples=20 00:28:23.901 lat (msec) : 20=0.33%, 50=99.67% 00:28:23.901 cpu : usr=98.08%, sys=1.51%, ctx=14, majf=0, minf=9 00:28:23.901 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591102: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10014msec) 00:28:23.901 slat (nsec): min=8290, max=96880, avg=36731.47, stdev=15204.44 00:28:23.901 clat (usec): min=22387, max=46138, avg=32938.79, stdev=920.80 00:28:23.901 lat (usec): min=22421, max=46193, avg=32975.52, stdev=920.13 00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.901 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.901 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:28:23.901 | 99.00th=[36963], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:28:23.901 | 99.99th=[46400] 00:28:23.901 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1920.00, stdev=41.53, samples=20 00:28:23.901 iops : min= 448, max= 512, avg=480.00, stdev=10.38, samples=20 00:28:23.901 lat (msec) : 50=100.00% 00:28:23.901 cpu : usr=96.91%, sys=2.11%, ctx=136, majf=0, minf=9 00:28:23.901 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.901 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591103: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10016msec) 00:28:23.901 slat (nsec): min=8477, max=67676, avg=31475.51, stdev=9655.70 00:28:23.901 clat (usec): min=15297, max=66081, avg=32976.11, stdev=2149.48 00:28:23.901 lat (usec): min=15311, max=66115, avg=33007.59, stdev=2149.20 00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[25822], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.901 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.901 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.901 | 99.00th=[41681], 99.50th=[45876], 99.90th=[52167], 99.95th=[52167], 00:28:23.901 | 99.99th=[66323] 00:28:23.901 bw ( KiB/s): min= 1715, max= 2048, per=4.14%, avg=1916.74, stdev=70.81, samples=19 00:28:23.901 iops : min= 428, max= 512, avg=478.95, stdev=17.88, samples=19 00:28:23.901 lat (msec) : 20=0.29%, 50=99.50%, 100=0.21% 00:28:23.901 cpu : usr=98.23%, sys=1.36%, ctx=14, majf=0, minf=9 00:28:23.901 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:23.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591104: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10017msec) 00:28:23.901 slat (usec): min=8, max=106, avg=33.45, stdev=20.29 00:28:23.901 clat (usec): min=21888, max=50857, avg=32995.92, stdev=1387.82 00:28:23.901 lat (usec): min=21898, max=50899, avg=33029.37, stdev=1385.96 00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[31589], 5.00th=[32113], 
10.00th=[32375], 20.00th=[32375], 00:28:23.901 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.901 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.901 | 99.00th=[36963], 99.50th=[37487], 99.90th=[50594], 99.95th=[50594], 00:28:23.901 | 99.99th=[51119] 00:28:23.901 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=71.93, samples=20 00:28:23.901 iops : min= 416, max= 512, avg=480.00, stdev=17.98, samples=20 00:28:23.901 lat (msec) : 50=99.67%, 100=0.33% 00:28:23.901 cpu : usr=97.78%, sys=1.81%, ctx=21, majf=0, minf=9 00:28:23.901 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.901 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.901 filename0: (groupid=0, jobs=1): err= 0: pid=2591105: Thu Jul 25 07:33:54 2024 00:28:23.901 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10017msec) 00:28:23.901 slat (usec): min=9, max=103, avg=41.11, stdev=19.91 00:28:23.901 clat (usec): min=20789, max=79341, avg=32949.40, stdev=2359.48 00:28:23.901 lat (usec): min=20825, max=79379, avg=32990.52, stdev=2357.86 00:28:23.901 clat percentiles (usec): 00:28:23.901 | 1.00th=[28443], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:23.901 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.901 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:28:23.901 | 99.00th=[37487], 99.50th=[45876], 99.90th=[60031], 99.95th=[60031], 00:28:23.901 | 99.99th=[79168] 00:28:23.901 bw ( KiB/s): min= 1632, max= 2048, per=4.14%, avg=1918.40, stdev=78.02, samples=20 00:28:23.901 iops : min= 408, max= 512, avg=479.60, stdev=19.51, samples=20 00:28:23.901 lat (msec) : 50=99.50%, 100=0.50% 
00:28:23.902 cpu : usr=98.07%, sys=1.48%, ctx=19, majf=0, minf=9 00:28:23.902 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename0: (groupid=0, jobs=1): err= 0: pid=2591106: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10008msec) 00:28:23.902 slat (usec): min=7, max=119, avg=37.95, stdev=20.50 00:28:23.902 clat (usec): min=9539, max=69586, avg=32982.51, stdev=2490.76 00:28:23.902 lat (usec): min=9547, max=69625, avg=33020.46, stdev=2490.08 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.902 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.902 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:28:23.902 | 99.00th=[37487], 99.50th=[43254], 99.90th=[60556], 99.95th=[60556], 00:28:23.902 | 99.99th=[69731] 00:28:23.902 bw ( KiB/s): min= 1667, max= 2032, per=4.13%, avg=1913.42, stdev=65.19, samples=19 00:28:23.902 iops : min= 416, max= 508, avg=478.32, stdev=16.46, samples=19 00:28:23.902 lat (msec) : 10=0.29%, 20=0.33%, 50=99.00%, 100=0.37% 00:28:23.902 cpu : usr=97.97%, sys=1.62%, ctx=14, majf=0, minf=9 00:28:23.902 IO depths : 1=0.2%, 2=6.5%, 4=24.9%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename1: (groupid=0, jobs=1): 
err= 0: pid=2591107: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10014msec) 00:28:23.902 slat (usec): min=8, max=105, avg=47.28, stdev=20.74 00:28:23.902 clat (usec): min=21085, max=60184, avg=32876.59, stdev=2454.18 00:28:23.902 lat (usec): min=21136, max=60203, avg=32923.87, stdev=2452.81 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[26608], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:23.902 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.902 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:23.902 | 99.00th=[37487], 99.50th=[50070], 99.90th=[60031], 99.95th=[60031], 00:28:23.902 | 99.99th=[60031] 00:28:23.902 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1918.40, stdev=72.84, samples=20 00:28:23.902 iops : min= 416, max= 512, avg=479.60, stdev=18.21, samples=20 00:28:23.902 lat (msec) : 50=99.46%, 100=0.54% 00:28:23.902 cpu : usr=98.09%, sys=1.46%, ctx=12, majf=0, minf=9 00:28:23.902 IO depths : 1=4.9%, 2=11.0%, 4=24.6%, 8=51.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename1: (groupid=0, jobs=1): err= 0: pid=2591108: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10008msec) 00:28:23.902 slat (nsec): min=8904, max=59622, avg=29096.40, stdev=9305.64 00:28:23.902 clat (usec): min=13604, max=62245, avg=32976.02, stdev=2207.31 00:28:23.902 lat (usec): min=13617, max=62277, avg=33005.11, stdev=2207.49 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.902 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 
60.00th=[32900], 00:28:23.902 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.902 | 99.00th=[36963], 99.50th=[37487], 99.90th=[62129], 99.95th=[62129], 00:28:23.902 | 99.99th=[62129] 00:28:23.902 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1920.16, stdev=84.83, samples=19 00:28:23.902 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:28:23.902 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:28:23.902 cpu : usr=94.54%, sys=2.97%, ctx=216, majf=0, minf=9 00:28:23.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename1: (groupid=0, jobs=1): err= 0: pid=2591109: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=479, BW=1918KiB/s (1965kB/s)(18.8MiB/10008msec) 00:28:23.902 slat (usec): min=8, max=103, avg=41.62, stdev=17.31 00:28:23.902 clat (usec): min=22136, max=85655, avg=32992.81, stdev=2342.07 00:28:23.902 lat (usec): min=22176, max=85705, avg=33034.43, stdev=2340.13 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:23.902 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.902 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:28:23.902 | 99.00th=[36963], 99.50th=[37487], 99.90th=[65799], 99.95th=[66323], 00:28:23.902 | 99.99th=[85459] 00:28:23.902 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1913.60, stdev=65.33, samples=20 00:28:23.902 iops : min= 416, max= 512, avg=478.40, stdev=16.33, samples=20 00:28:23.902 lat (msec) : 50=99.67%, 100=0.33% 00:28:23.902 cpu : usr=98.22%, sys=1.37%, ctx=15, majf=0, minf=9 00:28:23.902 IO depths : 
1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename1: (groupid=0, jobs=1): err= 0: pid=2591110: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10023msec) 00:28:23.902 slat (nsec): min=6139, max=65248, avg=18057.10, stdev=9503.98 00:28:23.902 clat (usec): min=14062, max=37842, avg=32921.56, stdev=1804.17 00:28:23.902 lat (usec): min=14097, max=37883, avg=32939.62, stdev=1804.17 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:23.902 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:23.902 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.902 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[38011], 00:28:23.902 | 99.99th=[38011] 00:28:23.902 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1932.80, stdev=57.24, samples=20 00:28:23.902 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:28:23.902 lat (msec) : 20=0.66%, 50=99.34% 00:28:23.902 cpu : usr=94.93%, sys=2.74%, ctx=159, majf=0, minf=9 00:28:23.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename1: (groupid=0, jobs=1): err= 0: pid=2591111: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=480, BW=1924KiB/s 
(1970kB/s)(18.8MiB/10009msec) 00:28:23.902 slat (usec): min=8, max=103, avg=34.35, stdev=14.67 00:28:23.902 clat (usec): min=15121, max=62263, avg=32966.20, stdev=2272.23 00:28:23.902 lat (usec): min=15135, max=62308, avg=33000.55, stdev=2272.47 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.902 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.902 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.902 | 99.00th=[36963], 99.50th=[44303], 99.90th=[62129], 99.95th=[62129], 00:28:23.902 | 99.99th=[62129] 00:28:23.902 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1919.32, stdev=83.57, samples=19 00:28:23.902 iops : min= 416, max= 512, avg=479.79, stdev=21.02, samples=19 00:28:23.902 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:28:23.902 cpu : usr=91.42%, sys=4.54%, ctx=432, majf=0, minf=9 00:28:23.902 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:28:23.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.902 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.902 filename1: (groupid=0, jobs=1): err= 0: pid=2591112: Thu Jul 25 07:33:54 2024 00:28:23.902 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10017msec) 00:28:23.902 slat (usec): min=8, max=101, avg=34.94, stdev=15.93 00:28:23.902 clat (usec): min=24604, max=50938, avg=32974.67, stdev=1397.96 00:28:23.902 lat (usec): min=24639, max=50966, avg=33009.61, stdev=1396.67 00:28:23.902 clat percentiles (usec): 00:28:23.902 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.902 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.902 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 
95.00th=[34341], 00:28:23.903 | 99.00th=[36963], 99.50th=[37487], 99.90th=[50594], 99.95th=[51119], 00:28:23.903 | 99.99th=[51119] 00:28:23.903 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=71.93, samples=20 00:28:23.903 iops : min= 416, max= 512, avg=480.00, stdev=17.98, samples=20 00:28:23.903 lat (msec) : 50=99.67%, 100=0.33% 00:28:23.903 cpu : usr=97.86%, sys=1.72%, ctx=15, majf=0, minf=9 00:28:23.903 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename1: (groupid=0, jobs=1): err= 0: pid=2591113: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=482, BW=1928KiB/s (1974kB/s)(18.9MiB/10024msec) 00:28:23.903 slat (nsec): min=5572, max=66826, avg=25473.22, stdev=12430.47 00:28:23.903 clat (usec): min=15677, max=37614, avg=32985.29, stdev=1180.92 00:28:23.903 lat (usec): min=15740, max=37644, avg=33010.76, stdev=1179.11 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:23.903 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:23.903 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.903 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:28:23.903 | 99.99th=[37487] 00:28:23.903 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1926.40, stdev=50.44, samples=20 00:28:23.903 iops : min= 448, max= 512, avg=481.60, stdev=12.61, samples=20 00:28:23.903 lat (msec) : 20=0.29%, 50=99.71% 00:28:23.903 cpu : usr=90.98%, sys=4.31%, ctx=206, majf=0, minf=9 00:28:23.903 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.903 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename1: (groupid=0, jobs=1): err= 0: pid=2591114: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10014msec) 00:28:23.903 slat (usec): min=16, max=107, avg=46.70, stdev=18.96 00:28:23.903 clat (usec): min=21986, max=55498, avg=32823.39, stdev=1021.99 00:28:23.903 lat (usec): min=22018, max=55535, avg=32870.10, stdev=1020.48 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:23.903 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.903 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:28:23.903 | 99.00th=[36963], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:28:23.903 | 99.99th=[55313] 00:28:23.903 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1920.15, stdev=41.04, samples=20 00:28:23.903 iops : min= 448, max= 512, avg=480.00, stdev=10.38, samples=20 00:28:23.903 lat (msec) : 50=99.96%, 100=0.04% 00:28:23.903 cpu : usr=98.05%, sys=1.50%, ctx=13, majf=0, minf=9 00:28:23.903 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename2: (groupid=0, jobs=1): err= 0: pid=2591115: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=483, BW=1936KiB/s (1982kB/s)(18.9MiB/10019msec) 00:28:23.903 slat (nsec): min=8609, max=99596, avg=33129.65, 
stdev=19664.12 00:28:23.903 clat (usec): min=10351, max=44218, avg=32785.81, stdev=2002.50 00:28:23.903 lat (usec): min=10360, max=44252, avg=32818.94, stdev=2001.63 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.903 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.903 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:28:23.903 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[43779], 00:28:23.903 | 99.99th=[44303] 00:28:23.903 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1932.80, stdev=39.40, samples=20 00:28:23.903 iops : min= 480, max= 512, avg=483.20, stdev= 9.85, samples=20 00:28:23.903 lat (msec) : 20=0.66%, 50=99.34% 00:28:23.903 cpu : usr=98.11%, sys=1.46%, ctx=15, majf=0, minf=9 00:28:23.903 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename2: (groupid=0, jobs=1): err= 0: pid=2591116: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10011msec) 00:28:23.903 slat (usec): min=8, max=102, avg=39.82, stdev=19.64 00:28:23.903 clat (usec): min=13933, max=37666, avg=32796.79, stdev=1432.76 00:28:23.903 lat (usec): min=13994, max=37722, avg=32836.60, stdev=1430.15 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:23.903 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.903 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:23.903 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 
00:28:23.903 | 99.99th=[37487] 00:28:23.903 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1926.40, stdev=28.62, samples=20 00:28:23.903 iops : min= 480, max= 512, avg=481.60, stdev= 7.16, samples=20 00:28:23.903 lat (msec) : 20=0.33%, 50=99.67% 00:28:23.903 cpu : usr=97.71%, sys=1.62%, ctx=100, majf=0, minf=9 00:28:23.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename2: (groupid=0, jobs=1): err= 0: pid=2591117: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10017msec) 00:28:23.903 slat (usec): min=8, max=112, avg=36.18, stdev=22.17 00:28:23.903 clat (usec): min=21354, max=50756, avg=32972.04, stdev=1420.81 00:28:23.903 lat (usec): min=21364, max=50774, avg=33008.22, stdev=1415.42 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:23.903 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.903 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.903 | 99.00th=[36963], 99.50th=[37487], 99.90th=[50594], 99.95th=[50594], 00:28:23.903 | 99.99th=[50594] 00:28:23.903 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1920.15, stdev=71.37, samples=20 00:28:23.903 iops : min= 416, max= 512, avg=480.00, stdev=17.98, samples=20 00:28:23.903 lat (msec) : 50=99.67%, 100=0.33% 00:28:23.903 cpu : usr=97.59%, sys=1.99%, ctx=19, majf=0, minf=9 00:28:23.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename2: (groupid=0, jobs=1): err= 0: pid=2591118: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10008msec) 00:28:23.903 slat (nsec): min=8121, max=63035, avg=20551.93, stdev=11592.93 00:28:23.903 clat (usec): min=11585, max=61321, avg=33199.21, stdev=2972.13 00:28:23.903 lat (usec): min=11593, max=61357, avg=33219.76, stdev=2972.85 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[22938], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:23.903 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:28:23.903 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:28:23.903 | 99.00th=[44827], 99.50th=[49546], 99.90th=[61080], 99.95th=[61080], 00:28:23.903 | 99.99th=[61080] 00:28:23.903 bw ( KiB/s): min= 1664, max= 1968, per=4.13%, avg=1914.11, stdev=63.82, samples=19 00:28:23.903 iops : min= 416, max= 492, avg=478.53, stdev=15.96, samples=19 00:28:23.903 lat (msec) : 20=0.42%, 50=99.21%, 100=0.37% 00:28:23.903 cpu : usr=98.07%, sys=1.52%, ctx=9, majf=0, minf=9 00:28:23.903 IO depths : 1=0.5%, 2=2.1%, 4=6.5%, 8=74.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 complete : 0=0.0%, 4=90.6%, 8=7.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.903 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.903 filename2: (groupid=0, jobs=1): err= 0: pid=2591119: Thu Jul 25 07:33:54 2024 00:28:23.903 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10016msec) 00:28:23.903 slat (nsec): min=8280, max=84503, avg=31845.54, stdev=10510.28 00:28:23.903 clat (usec): min=15057, max=57765, avg=32886.99, stdev=2041.42 
00:28:23.903 lat (usec): min=15098, max=57799, avg=32918.84, stdev=2041.43 00:28:23.903 clat percentiles (usec): 00:28:23.903 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.903 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.904 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.904 | 99.00th=[38011], 99.50th=[43779], 99.90th=[51643], 99.95th=[57410], 00:28:23.904 | 99.99th=[57934] 00:28:23.904 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1923.47, stdev=54.84, samples=19 00:28:23.904 iops : min= 448, max= 512, avg=480.63, stdev=13.87, samples=19 00:28:23.904 lat (msec) : 20=0.37%, 50=99.40%, 100=0.23% 00:28:23.904 cpu : usr=97.97%, sys=1.60%, ctx=17, majf=0, minf=9 00:28:23.904 IO depths : 1=5.2%, 2=10.6%, 4=21.6%, 8=54.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:28:23.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 complete : 0=0.0%, 4=93.4%, 8=1.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.904 filename2: (groupid=0, jobs=1): err= 0: pid=2591120: Thu Jul 25 07:33:54 2024 00:28:23.904 read: IOPS=480, BW=1923KiB/s (1970kB/s)(18.8MiB/10007msec) 00:28:23.904 slat (usec): min=8, max=106, avg=39.87, stdev=17.63 00:28:23.904 clat (usec): min=13751, max=60908, avg=32944.64, stdev=2370.62 00:28:23.904 lat (usec): min=13764, max=60943, avg=32984.51, stdev=2370.97 00:28:23.904 clat percentiles (usec): 00:28:23.904 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.904 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:23.904 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:23.904 | 99.00th=[37487], 99.50th=[47973], 99.90th=[60556], 99.95th=[61080], 00:28:23.904 | 99.99th=[61080] 00:28:23.904 bw ( KiB/s): min= 1667, max= 2048, 
per=4.14%, avg=1918.47, stdev=68.44, samples=19 00:28:23.904 iops : min= 416, max= 512, avg=479.58, stdev=17.26, samples=19 00:28:23.904 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:23.904 cpu : usr=94.14%, sys=3.40%, ctx=220, majf=0, minf=9 00:28:23.904 IO depths : 1=5.4%, 2=10.9%, 4=22.1%, 8=53.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:28:23.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.904 filename2: (groupid=0, jobs=1): err= 0: pid=2591121: Thu Jul 25 07:33:54 2024 00:28:23.904 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10012msec) 00:28:23.904 slat (nsec): min=8430, max=80337, avg=32916.49, stdev=11909.56 00:28:23.904 clat (usec): min=15229, max=67529, avg=32968.05, stdev=1920.46 00:28:23.904 lat (usec): min=15237, max=67562, avg=33000.96, stdev=1920.86 00:28:23.904 clat percentiles (usec): 00:28:23.904 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:23.904 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.904 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:28:23.904 | 99.00th=[36963], 99.50th=[37487], 99.90th=[54789], 99.95th=[54789], 00:28:23.904 | 99.99th=[67634] 00:28:23.904 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=73.90, samples=19 00:28:23.904 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:28:23.904 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:28:23.904 cpu : usr=97.93%, sys=1.67%, ctx=14, majf=0, minf=9 00:28:23.904 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:23.904 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.904 filename2: (groupid=0, jobs=1): err= 0: pid=2591122: Thu Jul 25 07:33:54 2024 00:28:23.904 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:28:23.904 slat (nsec): min=8470, max=82424, avg=29544.79, stdev=10733.62 00:28:23.904 clat (usec): min=21709, max=59942, avg=33088.26, stdev=1915.96 00:28:23.904 lat (usec): min=21721, max=59962, avg=33117.81, stdev=1915.95 00:28:23.904 clat percentiles (usec): 00:28:23.904 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:23.904 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:23.904 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:28:23.904 | 99.00th=[37487], 99.50th=[43254], 99.90th=[60031], 99.95th=[60031], 00:28:23.904 | 99.99th=[60031] 00:28:23.904 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=84.33, samples=19 00:28:23.904 iops : min= 416, max= 512, avg=480.00, stdev=21.08, samples=19 00:28:23.904 lat (msec) : 50=99.63%, 100=0.38% 00:28:23.904 cpu : usr=97.85%, sys=1.74%, ctx=15, majf=0, minf=9 00:28:23.904 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:23.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.904 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.904 00:28:23.904 Run status group 0 (all jobs): 00:28:23.904 READ: bw=45.2MiB/s (47.4MB/s), 1918KiB/s-2039KiB/s (1965kB/s-2087kB/s), io=453MiB (475MB), run=10003-10024msec 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 
00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:23.904 07:33:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:23.904 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 bdev_null0 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:23.905 07:33:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 [2024-07-25 07:33:55.302821] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 bdev_null1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 
07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.905 { 00:28:23.905 "params": { 00:28:23.905 "name": "Nvme$subsystem", 00:28:23.905 "trtype": "$TEST_TRANSPORT", 00:28:23.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.905 "adrfam": "ipv4", 00:28:23.905 "trsvcid": "$NVMF_PORT", 00:28:23.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.905 "hdgst": ${hdgst:-false}, 00:28:23.905 "ddgst": ${ddgst:-false} 00:28:23.905 }, 00:28:23.905 "method": "bdev_nvme_attach_controller" 00:28:23.905 } 00:28:23.905 EOF 00:28:23.905 )") 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.905 { 00:28:23.905 "params": { 00:28:23.905 "name": "Nvme$subsystem", 00:28:23.905 "trtype": "$TEST_TRANSPORT", 00:28:23.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.905 "adrfam": "ipv4", 00:28:23.905 "trsvcid": "$NVMF_PORT", 00:28:23.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.905 "hdgst": ${hdgst:-false}, 00:28:23.905 "ddgst": ${ddgst:-false} 00:28:23.905 }, 00:28:23.905 "method": "bdev_nvme_attach_controller" 00:28:23.905 } 00:28:23.905 EOF 00:28:23.905 )") 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.905 "params": { 00:28:23.905 "name": "Nvme0", 00:28:23.905 "trtype": "tcp", 00:28:23.905 "traddr": "10.0.0.2", 00:28:23.905 "adrfam": "ipv4", 00:28:23.905 "trsvcid": "4420", 00:28:23.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.905 "hdgst": false, 00:28:23.905 "ddgst": false 00:28:23.905 }, 00:28:23.905 "method": "bdev_nvme_attach_controller" 00:28:23.905 },{ 00:28:23.905 "params": { 00:28:23.905 "name": "Nvme1", 00:28:23.905 "trtype": "tcp", 00:28:23.905 "traddr": "10.0.0.2", 00:28:23.905 "adrfam": "ipv4", 00:28:23.905 "trsvcid": "4420", 00:28:23.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:23.905 "hdgst": false, 00:28:23.905 "ddgst": false 00:28:23.905 }, 00:28:23.905 "method": "bdev_nvme_attach_controller" 00:28:23.905 }' 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:23.905 07:33:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:23.905 07:33:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.905 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:23.905 ... 00:28:23.905 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:23.905 ... 00:28:23.905 fio-3.35 00:28:23.905 Starting 4 threads 00:28:23.906 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.207 00:28:29.207 filename0: (groupid=0, jobs=1): err= 0: pid=2592497: Thu Jul 25 07:34:01 2024 00:28:29.207 read: IOPS=1887, BW=14.7MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:28:29.207 slat (nsec): min=3860, max=60540, avg=15042.58, stdev=7237.30 00:28:29.207 clat (usec): min=1540, max=7896, avg=4188.06, stdev=615.10 00:28:29.207 lat (usec): min=1561, max=7910, avg=4203.11, stdev=616.26 00:28:29.207 clat percentiles (usec): 00:28:29.207 | 1.00th=[ 2737], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3687], 00:28:29.207 | 30.00th=[ 3949], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4424], 00:28:29.207 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5014], 00:28:29.207 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7570], 99.95th=[ 7898], 00:28:29.207 | 99.99th=[ 7898] 00:28:29.207 bw ( KiB/s): min=14224, max=16192, per=26.21%, avg=15118.22, stdev=676.01, samples=9 00:28:29.207 iops : min= 1778, max= 2024, avg=1890.22, stdev=85.14, samples=9 00:28:29.207 lat (msec) : 2=0.02%, 4=31.61%, 10=68.36% 00:28:29.207 cpu : usr=94.50%, sys=4.96%, ctx=19, majf=0, minf=9 00:28:29.207 IO depths : 1=0.1%, 2=10.8%, 4=61.1%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:29.207 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.207 issued rwts: total=9442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.207 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.207 filename0: (groupid=0, jobs=1): err= 0: pid=2592498: Thu Jul 25 07:34:01 2024 00:28:29.207 read: IOPS=1806, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5002msec) 00:28:29.207 slat (nsec): min=3967, max=67671, avg=15811.23, stdev=8640.60 00:28:29.207 clat (usec): min=1031, max=7775, avg=4371.90, stdev=645.50 00:28:29.207 lat (usec): min=1045, max=7791, avg=4387.71, stdev=646.06 00:28:29.207 clat percentiles (usec): 00:28:29.207 | 1.00th=[ 2769], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3982], 00:28:29.207 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:28:29.207 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5538], 00:28:29.207 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7701], 00:28:29.207 | 99.99th=[ 7767] 00:28:29.207 bw ( KiB/s): min=13792, max=15040, per=25.08%, avg=14469.33, stdev=365.38, samples=9 00:28:29.207 iops : min= 1724, max= 1880, avg=1808.67, stdev=45.67, samples=9 00:28:29.207 lat (msec) : 2=0.17%, 4=20.63%, 10=79.20% 00:28:29.207 cpu : usr=92.30%, sys=5.76%, ctx=215, majf=0, minf=9 00:28:29.208 IO depths : 1=0.1%, 2=10.7%, 4=61.8%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.208 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.208 issued rwts: total=9035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.208 filename1: (groupid=0, jobs=1): err= 0: pid=2592499: Thu Jul 25 07:34:01 2024 00:28:29.208 read: IOPS=1769, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5002msec) 00:28:29.208 slat (nsec): min=4880, max=67589, avg=15033.37, stdev=7833.09 00:28:29.208 clat (usec): min=1097, max=7852, 
avg=4469.46, stdev=653.41 00:28:29.208 lat (usec): min=1111, max=7872, avg=4484.49, stdev=653.08 00:28:29.208 clat percentiles (usec): 00:28:29.208 | 1.00th=[ 3032], 5.00th=[ 3621], 10.00th=[ 3884], 20.00th=[ 4113], 00:28:29.208 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:28:29.208 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5932], 00:28:29.208 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7832], 00:28:29.208 | 99.99th=[ 7832] 00:28:29.208 bw ( KiB/s): min=13636, max=14448, per=24.53%, avg=14148.00, stdev=275.71, samples=9 00:28:29.208 iops : min= 1704, max= 1806, avg=1768.44, stdev=34.58, samples=9 00:28:29.208 lat (msec) : 2=0.07%, 4=15.02%, 10=84.92% 00:28:29.208 cpu : usr=95.52%, sys=4.04%, ctx=7, majf=0, minf=9 00:28:29.208 IO depths : 1=0.1%, 2=9.1%, 4=64.2%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.208 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.208 issued rwts: total=8850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.208 filename1: (groupid=0, jobs=1): err= 0: pid=2592500: Thu Jul 25 07:34:01 2024 00:28:29.208 read: IOPS=1747, BW=13.7MiB/s (14.3MB/s)(68.3MiB/5001msec) 00:28:29.208 slat (nsec): min=4905, max=67501, avg=14735.56, stdev=7592.64 00:28:29.208 clat (usec): min=1042, max=8315, avg=4525.32, stdev=647.44 00:28:29.208 lat (usec): min=1056, max=8330, avg=4540.06, stdev=647.54 00:28:29.208 clat percentiles (usec): 00:28:29.208 | 1.00th=[ 3195], 5.00th=[ 3752], 10.00th=[ 3949], 20.00th=[ 4178], 00:28:29.208 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:28:29.208 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5211], 95.00th=[ 5866], 00:28:29.208 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8094], 00:28:29.208 | 99.99th=[ 8291] 00:28:29.208 bw ( KiB/s): 
min=13648, max=14352, per=24.30%, avg=14020.78, stdev=260.47, samples=9 00:28:29.208 iops : min= 1706, max= 1794, avg=1752.56, stdev=32.57, samples=9 00:28:29.208 lat (msec) : 2=0.14%, 4=11.59%, 10=88.27% 00:28:29.208 cpu : usr=91.74%, sys=6.28%, ctx=214, majf=0, minf=9 00:28:29.208 IO depths : 1=0.1%, 2=8.5%, 4=64.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.208 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.208 issued rwts: total=8741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.208 00:28:29.208 Run status group 0 (all jobs): 00:28:29.208 READ: bw=56.3MiB/s (59.1MB/s), 13.7MiB/s-14.7MiB/s (14.3MB/s-15.5MB/s), io=282MiB (295MB), run=5001-5002msec 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 00:28:29.208 real 0m24.297s 00:28:29.208 user 4m29.735s 00:28:29.208 sys 0m8.168s 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 ************************************ 00:28:29.208 END TEST fio_dif_rand_params 00:28:29.208 ************************************ 00:28:29.208 07:34:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:29.208 07:34:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:29.208 07:34:01 nvmf_dif -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:28:29.208 07:34:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 ************************************ 00:28:29.208 START TEST fio_dif_digest 00:28:29.208 ************************************ 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 bdev_null0 
00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.208 [2024-07-25 07:34:01.545158] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # 
local subsystem config 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.208 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:29.208 { 00:28:29.208 "params": { 00:28:29.208 "name": "Nvme$subsystem", 00:28:29.208 "trtype": "$TEST_TRANSPORT", 00:28:29.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.208 "adrfam": "ipv4", 00:28:29.208 "trsvcid": "$NVMF_PORT", 00:28:29.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.208 "hdgst": ${hdgst:-false}, 00:28:29.208 "ddgst": ${ddgst:-false} 00:28:29.208 }, 00:28:29.208 "method": "bdev_nvme_attach_controller" 00:28:29.209 } 00:28:29.209 EOF 00:28:29.209 )") 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:29.209 
07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:29.209 "params": { 00:28:29.209 "name": "Nvme0", 00:28:29.209 "trtype": "tcp", 00:28:29.209 "traddr": "10.0.0.2", 00:28:29.209 "adrfam": "ipv4", 00:28:29.209 "trsvcid": "4420", 00:28:29.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.209 "hdgst": true, 00:28:29.209 "ddgst": true 00:28:29.209 }, 00:28:29.209 "method": "bdev_nvme_attach_controller" 00:28:29.209 }' 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.209 07:34:01 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:29.209 07:34:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.467 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:29.467 ... 00:28:29.467 fio-3.35 00:28:29.467 Starting 3 threads 00:28:29.467 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.664 00:28:41.664 filename0: (groupid=0, jobs=1): err= 0: pid=2593256: Thu Jul 25 07:34:12 2024 00:28:41.664 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(228MiB/10045msec) 00:28:41.664 slat (nsec): min=6564, max=48602, avg=15607.14, stdev=5064.31 00:28:41.664 clat (usec): min=9397, max=97663, avg=16496.59, stdev=3423.18 00:28:41.664 lat (usec): min=9410, max=97676, avg=16512.20, stdev=3423.19 00:28:41.664 clat percentiles (usec): 00:28:41.664 | 1.00th=[10814], 5.00th=[12518], 10.00th=[14484], 20.00th=[15401], 00:28:41.664 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16581], 60.00th=[16909], 00:28:41.664 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:28:41.664 | 99.00th=[19792], 99.50th=[21627], 99.90th=[58983], 99.95th=[98042], 00:28:41.664 | 99.99th=[98042] 00:28:41.664 bw ( KiB/s): min=20224, max=25856, per=33.12%, avg=23298.20, stdev=1373.15, samples=20 00:28:41.664 iops : min= 158, max= 202, avg=182.00, stdev=10.74, samples=20 00:28:41.664 lat (msec) : 10=0.16%, 20=98.85%, 
50=0.60%, 100=0.38% 00:28:41.664 cpu : usr=90.68%, sys=8.84%, ctx=22, majf=0, minf=119 00:28:41.664 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.664 issued rwts: total=1822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.664 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.664 filename0: (groupid=0, jobs=1): err= 0: pid=2593257: Thu Jul 25 07:34:12 2024 00:28:41.664 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(249MiB/10046msec) 00:28:41.664 slat (nsec): min=6554, max=45960, avg=14339.68, stdev=3894.93 00:28:41.664 clat (usec): min=9208, max=58191, avg=15112.12, stdev=2522.59 00:28:41.664 lat (usec): min=9220, max=58205, avg=15126.46, stdev=2522.56 00:28:41.664 clat percentiles (usec): 00:28:41.664 | 1.00th=[10290], 5.00th=[11731], 10.00th=[13173], 20.00th=[14091], 00:28:41.664 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:28:41.664 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:28:41.664 | 99.00th=[18220], 99.50th=[19006], 99.90th=[56886], 99.95th=[57934], 00:28:41.664 | 99.99th=[57934] 00:28:41.664 bw ( KiB/s): min=22272, max=27904, per=36.15%, avg=25433.60, stdev=1273.99, samples=20 00:28:41.664 iops : min= 174, max= 218, avg=198.70, stdev= 9.95, samples=20 00:28:41.664 lat (msec) : 10=0.55%, 20=99.20%, 100=0.25% 00:28:41.664 cpu : usr=89.82%, sys=9.36%, ctx=18, majf=0, minf=133 00:28:41.664 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.664 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.664 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.664 filename0: 
(groupid=0, jobs=1): err= 0: pid=2593258: Thu Jul 25 07:34:12 2024 00:28:41.664 read: IOPS=170, BW=21.3MiB/s (22.3MB/s)(214MiB/10047msec) 00:28:41.664 slat (nsec): min=6636, max=67785, avg=14337.53, stdev=4000.38 00:28:41.664 clat (usec): min=10208, max=60149, avg=17571.26, stdev=5884.56 00:28:41.664 lat (usec): min=10220, max=60162, avg=17585.60, stdev=5884.48 00:28:41.664 clat percentiles (usec): 00:28:41.664 | 1.00th=[11600], 5.00th=[14484], 10.00th=[15139], 20.00th=[15664], 00:28:41.664 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:28:41.664 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18744], 95.00th=[19530], 00:28:41.664 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58983], 99.95th=[60031], 00:28:41.664 | 99.99th=[60031] 00:28:41.664 bw ( KiB/s): min=18944, max=23808, per=31.09%, avg=21875.20, stdev=1236.06, samples=20 00:28:41.664 iops : min= 148, max= 186, avg=170.90, stdev= 9.66, samples=20 00:28:41.665 lat (msec) : 20=97.14%, 50=0.88%, 100=1.99% 00:28:41.665 cpu : usr=91.08%, sys=8.21%, ctx=27, majf=0, minf=183 00:28:41.665 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.665 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.665 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.665 00:28:41.665 Run status group 0 (all jobs): 00:28:41.665 READ: bw=68.7MiB/s (72.0MB/s), 21.3MiB/s-24.7MiB/s (22.3MB/s-25.9MB/s), io=690MiB (724MB), run=10045-10047msec 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 
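The fio run above was driven by a JSON config produced by `gen_nvmf_target_json` (the `bdev_nvme_attach_controller` block printed in the trace). A minimal sketch of that generation, with the values taken from the Nvme0 block in the log (the traddr/trsvcid defaults are hard-coded here for illustration; the real helper reads them from environment variables):

```shell
# Sketch of gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# params block per subsystem index, comma-joined; hdgst/ddgst default
# to false, matching the ${hdgst:-false} expansion seen in the trace.
gen_target_json() {
  local subsystem
  local config=()
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

# The digest test sets both digests on, as in the log output above.
hdgst=true
ddgst=true
gen_target_json 0
```

This is what fio's `spdk_bdev` ioengine consumes via `--spdk_json_conf /dev/fd/62`.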
00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.665 00:28:41.665 real 0m11.233s 00:28:41.665 user 0m28.510s 00:28:41.665 sys 0m2.910s 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:41.665 07:34:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.665 ************************************ 00:28:41.665 END TEST fio_dif_digest 00:28:41.665 ************************************ 00:28:41.665 07:34:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:41.665 07:34:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.665 rmmod nvme_tcp 00:28:41.665 rmmod nvme_fabrics 00:28:41.665 rmmod nvme_keyring 
00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2587201 ']' 00:28:41.665 07:34:12 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2587201 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2587201 ']' 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2587201 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2587201 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2587201' 00:28:41.665 killing process with pid 2587201 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2587201 00:28:41.665 07:34:12 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2587201 00:28:41.665 07:34:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:41.665 07:34:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:41.665 Waiting for block devices as requested 00:28:41.665 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:41.923 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:41.923 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:41.923 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:42.182 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:42.182 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:42.182 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:28:42.182 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:42.439 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:42.440 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:42.440 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:42.440 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:42.698 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:42.698 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:42.698 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:42.698 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:42.956 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:42.956 07:34:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.956 07:34:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.956 07:34:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.956 07:34:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.956 07:34:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.956 07:34:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:42.956 07:34:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.484 07:34:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:45.484 00:28:45.484 real 1m6.699s 00:28:45.484 user 6m25.511s 00:28:45.484 sys 0m20.504s 00:28:45.484 07:34:17 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:45.484 07:34:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:45.484 ************************************ 00:28:45.484 END TEST nvmf_dif 00:28:45.484 ************************************ 00:28:45.484 07:34:17 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:45.484 07:34:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:45.484 07:34:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 
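The teardown above ends with `killprocess 2587201`, which checks the pid's process name via `ps --no-headers -o comm=` before signalling it. A simplified sketch of that pattern (the real helper also special-cases `uname` and sudo-wrapped targets):

```shell
# Sketch of the killprocess helper seen above: verify the pid is alive
# and is not the sudo wrapper itself, then kill and reap it.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0           # already gone
  local name
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" = sudo ]; then
    return 1                                       # never kill the sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                  # reap; ignore SIGTERM status
}

sleep 30 &
pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null || echo "process $pid reaped"
```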
00:28:45.484 07:34:17 -- common/autotest_common.sh@10 -- # set +x 00:28:45.484 ************************************ 00:28:45.484 START TEST nvmf_abort_qd_sizes 00:28:45.484 ************************************ 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:45.484 * Looking for test storage... 00:28:45.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.484 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:45.485 07:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:47.386 07:34:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:47.386 
07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:47.386 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:47.386 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.386 
07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.386 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:47.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:47.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:47.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:28:47.387 00:28:47.387 --- 10.0.0.2 ping statistics --- 00:28:47.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.387 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:47.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:28:47.387 00:28:47.387 --- 10.0.0.1 ping statistics --- 00:28:47.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.387 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:47.387 07:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:48.321 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.321 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:48.321 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.321 0000:80:04.6 (8086 0e26): ioatdma -> 
vfio-pci 00:28:48.321 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.321 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.579 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.579 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.579 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.579 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:49.513 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2598167 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2598167 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2598167 ']' 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:49.513 07:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:49.513 [2024-07-25 07:34:21.947926] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:28:49.513 [2024-07-25 07:34:21.948005] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.513 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.513 [2024-07-25 07:34:22.017918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:49.771 [2024-07-25 07:34:22.139028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.771 [2024-07-25 07:34:22.139097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.771 [2024-07-25 07:34:22.139114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.771 [2024-07-25 07:34:22.139128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.771 [2024-07-25 07:34:22.139140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:49.771 [2024-07-25 07:34:22.139231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.771 [2024-07-25 07:34:22.139287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.771 [2024-07-25 07:34:22.139339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.771 [2024-07-25 07:34:22.139342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:50.704 07:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.704 ************************************ 00:28:50.704 START TEST spdk_target_abort 00:28:50.704 ************************************ 00:28:50.704 07:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:28:50.704 07:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:50.704 07:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:28:50.704 07:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.704 07:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.251 spdk_targetn1 00:28:53.251 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.251 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.251 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.251 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.251 [2024-07-25 07:34:25.775074] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.509 [2024-07-25 07:34:25.807352] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:53.509 07:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:53.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.785 Initializing NVMe Controllers 00:28:56.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:56.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:56.785 Initialization complete. Launching workers. 
00:28:56.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11198, failed: 0 00:28:56.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1170, failed to submit 10028 00:28:56.785 success 820, unsuccess 350, failed 0 00:28:56.785 07:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:56.785 07:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.785 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.064 Initializing NVMe Controllers 00:29:00.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:00.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:00.064 Initialization complete. Launching workers. 
00:29:00.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8459, failed: 0 00:29:00.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1277, failed to submit 7182 00:29:00.064 success 270, unsuccess 1007, failed 0 00:29:00.064 07:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:00.064 07:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:00.064 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.342 Initializing NVMe Controllers 00:29:03.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:03.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:03.342 Initialization complete. Launching workers. 
00:29:03.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31067, failed: 0 00:29:03.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2699, failed to submit 28368 00:29:03.342 success 543, unsuccess 2156, failed 0 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.342 07:34:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2598167 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2598167 ']' 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2598167 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.274 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2598167 00:29:04.532 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:04.532 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:04.532 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2598167' 00:29:04.532 killing process with pid 2598167 00:29:04.532 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2598167 00:29:04.532 07:34:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2598167 00:29:04.791 00:29:04.791 real 0m14.145s 00:29:04.791 user 0m55.745s 00:29:04.791 sys 0m2.692s 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:04.791 ************************************ 00:29:04.791 END TEST spdk_target_abort 00:29:04.791 ************************************ 00:29:04.791 07:34:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:04.791 07:34:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.791 07:34:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.791 07:34:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:04.791 ************************************ 00:29:04.791 START TEST kernel_target_abort 00:29:04.791 ************************************ 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:04.791 07:34:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:04.791 07:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:05.726 Waiting for block devices as requested 00:29:05.726 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:05.984 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:05.984 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:06.242 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:06.242 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:06.242 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:06.242 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:06.499 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:06.499 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:06.499 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:06.499 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:06.757 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:06.757 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:06.757 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:07.015 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:07.015 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:07.015 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:07.272 No valid GPT data, bailing 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:07.272 00:29:07.272 Discovery Log Number of Records 2, Generation counter 2 00:29:07.272 =====Discovery Log Entry 0====== 00:29:07.272 trtype: tcp 00:29:07.272 adrfam: ipv4 00:29:07.272 subtype: current discovery subsystem 00:29:07.272 treq: not specified, sq flow control disable supported 00:29:07.272 portid: 1 00:29:07.272 trsvcid: 4420 00:29:07.272 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:07.272 traddr: 10.0.0.1 00:29:07.272 eflags: none 00:29:07.272 sectype: none 00:29:07.272 =====Discovery Log Entry 1====== 00:29:07.272 trtype: tcp 00:29:07.272 adrfam: ipv4 00:29:07.272 subtype: nvme subsystem 00:29:07.272 treq: not specified, sq flow control disable supported 00:29:07.272 portid: 1 00:29:07.272 trsvcid: 4420 00:29:07.272 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:07.272 traddr: 10.0.0.1 00:29:07.272 eflags: none 00:29:07.272 sectype: none 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.272 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:07.273 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.273 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:07.273 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.273 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.273 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:07.273 07:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.273 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.550 Initializing NVMe Controllers 00:29:10.551 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:10.551 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:10.551 Initialization complete. Launching workers. 
00:29:10.551 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33513, failed: 0 00:29:10.551 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33513, failed to submit 0 00:29:10.551 success 0, unsuccess 33513, failed 0 00:29:10.551 07:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:10.551 07:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.551 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.844 Initializing NVMe Controllers 00:29:13.844 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:13.844 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:13.844 Initialization complete. Launching workers. 
00:29:13.844 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63989, failed: 0
00:29:13.844 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16146, failed to submit 47843
00:29:13.844 success 0, unsuccess 16146, failed 0
00:29:13.844 07:34:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:29:13.844 07:34:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:29:13.844 EAL: No free 2048 kB hugepages reported on node 1
00:29:17.121 Initializing NVMe Controllers
00:29:17.121 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:29:17.121 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:29:17.121 Initialization complete. Launching workers.
00:29:17.121 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64644, failed: 0
00:29:17.121 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16142, failed to submit 48502
00:29:17.121 success 0, unsuccess 16142, failed 0
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:29:17.121 07:34:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:29:18.055 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:29:18.055 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:29:18.055 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:29:18.989 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:29:18.989
00:29:18.990 real 0m14.336s
00:29:18.990 user 0m5.143s
00:29:18.990 sys 0m3.424s
00:29:18.990 07:34:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:18.990 07:34:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:29:18.990 ************************************
00:29:18.990 END TEST kernel_target_abort
00:29:18.990 ************************************
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:18.990 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:18.990 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2598167 ']'
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2598167
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2598167 ']'
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2598167
00:29:19.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2598167) - No such process
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2598167 is not found'
00:29:19.248 Process with pid 2598167 is not found
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']'
00:29:19.248 07:34:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:29:20.182 Waiting for block devices as requested
00:29:20.182 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:29:20.442 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:29:20.442 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:29:20.442 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:29:20.700 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:29:20.700 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:29:20.700 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:29:20.700 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:29:20.959 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:29:20.959 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:29:20.959 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:29:20.959 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:29:21.218 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:29:21.218 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:29:21.218 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:29:21.218 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:29:21.476 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:29:21.476 07:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:24.007 07:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:24.007
00:29:24.007 real 0m38.495s
00:29:24.007 user 1m3.176s
00:29:24.007 sys 0m9.494s
00:29:24.007 07:34:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:24.007 07:34:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:29:24.007 ************************************
00:29:24.007 END TEST nvmf_abort_qd_sizes
00:29:24.007 ************************************
00:29:24.007 07:34:55 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:29:24.007 07:34:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:29:24.007 07:34:55 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:24.007 07:34:55 -- common/autotest_common.sh@10 -- # set +x
00:29:24.007 ************************************
00:29:24.008 START TEST keyring_file
00:29:24.008 ************************************
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:29:24.008 * Looking for test storage...
00:29:24.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@7 -- # uname -s
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:24.008 07:34:56 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:24.008 07:34:56 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:24.008 07:34:56 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:24.008 07:34:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:24.008 07:34:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:24.008 07:34:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:24.008 07:34:56 keyring_file -- paths/export.sh@5 -- # export PATH
00:29:24.008 07:34:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@47 -- # : 0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@17 -- # name=key0
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@17 -- # digest=0
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@18 -- # mktemp
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t0I6vsuFb6
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@705 -- # python -
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t0I6vsuFb6
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t0I6vsuFb6
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.t0I6vsuFb6
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@17 -- # name=key1
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@17 -- # digest=0
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@18 -- # mktemp
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0NCzA4O7md
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:29:24.008 07:34:56 keyring_file -- nvmf/common.sh@705 -- # python -
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0NCzA4O7md
00:29:24.008 07:34:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0NCzA4O7md
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0NCzA4O7md
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=2604057
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:29:24.008 07:34:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2604057
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2604057 ']'
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:24.008 07:34:56 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:24.008 [2024-07-25 07:34:56.216718] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:29:24.008 [2024-07-25 07:34:56.216821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604057 ]
00:29:24.008 EAL: No free 2048 kB hugepages reported on node 1
00:29:24.008 [2024-07-25 07:34:56.277274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.008 [2024-07-25 07:34:56.396983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@864 -- # return 0
00:29:24.267 07:34:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:24.267 [2024-07-25 07:34:56.664092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:24.267 null0
00:29:24.267 [2024-07-25 07:34:56.696140] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:29:24.267 [2024-07-25 07:34:56.696675] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:29:24.267 [2024-07-25 07:34:56.704145] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.267 07:34:56 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:24.267 [2024-07-25 07:34:56.716162] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:29:24.267 request:
00:29:24.267 {
00:29:24.267 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:29:24.267 "secure_channel": false,
00:29:24.267 "listen_address": {
00:29:24.267 "trtype": "tcp",
00:29:24.267 "traddr": "127.0.0.1",
00:29:24.267 "trsvcid": "4420"
00:29:24.267 },
00:29:24.267 "method": "nvmf_subsystem_add_listener",
00:29:24.267 "req_id": 1
00:29:24.267 }
00:29:24.267 Got JSON-RPC error response
00:29:24.267 response:
00:29:24.267 {
00:29:24.267 "code": -32602,
00:29:24.267 "message": "Invalid parameters"
00:29:24.267 }
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:24.267 07:34:56 keyring_file -- keyring/file.sh@46 -- # bperfpid=2604063
00:29:24.267 07:34:56 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2604063 /var/tmp/bperf.sock
00:29:24.267 07:34:56 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2604063 ']'
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:24.267 07:34:56 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:29:24.267 [2024-07-25 07:34:56.763686] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization...
00:29:24.267 [2024-07-25 07:34:56.763790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604063 ]
00:29:24.267 EAL: No free 2048 kB hugepages reported on node 1
00:29:24.525 [2024-07-25 07:34:56.824168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.525 [2024-07-25 07:34:56.937467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:24.525 07:34:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:24.525 07:34:57 keyring_file -- common/autotest_common.sh@864 -- # return 0
00:29:24.525 07:34:57 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6
00:29:24.525 07:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6
00:29:24.783 07:34:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0NCzA4O7md
00:29:24.783 07:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0NCzA4O7md
00:29:25.041 07:34:57 keyring_file -- keyring/file.sh@51 -- # get_key key0
00:29:25.041 07:34:57 keyring_file -- keyring/file.sh@51 -- # jq -r .path
00:29:25.041 07:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:25.041 07:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:25.041 07:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.299 07:34:57 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.t0I6vsuFb6 == \/\t\m\p\/\t\m\p\.\t\0\I\6\v\s\u\F\b\6 ]]
00:29:25.299 07:34:57 keyring_file -- keyring/file.sh@52 -- # get_key key1
00:29:25.299 07:34:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:29:25.299 07:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:25.299 07:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:29:25.299 07:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.557 07:34:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0NCzA4O7md == \/\t\m\p\/\t\m\p\.\0\N\C\z\A\4\O\7\m\d ]]
00:29:25.557 07:34:58 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0
00:29:25.557 07:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:29:25.557 07:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:25.557 07:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:25.557 07:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.557 07:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:25.857 07:34:58 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 ))
00:29:25.857 07:34:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1
00:29:25.857 07:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:29:25.857 07:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:25.857 07:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:25.857 07:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:25.857 07:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:29:26.115 07:34:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:29:26.115 07:34:58 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:29:26.115 07:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:29:26.373 [2024-07-25 07:34:58.771655] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:29:26.373 nvme0n1
00:29:26.373 07:34:58 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0
00:29:26.373 07:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:29:26.373 07:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:26.373 07:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:26.373 07:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:26.373 07:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:26.631 07:34:59 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 ))
00:29:26.631 07:34:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1
00:29:26.631 07:34:59 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:29:26.631 07:34:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:26.631 07:34:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:26.631 07:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:26.631 07:34:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:29:26.889 07:34:59 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:29:26.889 07:34:59 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:27.147 Running I/O for 1 seconds...
00:29:28.080
00:29:28.080 Latency(us)
00:29:28.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.080 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:29:28.080 nvme0n1 : 1.01 4954.17 19.35 0.00 0.00 25707.71 10291.58 40583.77
00:29:28.080 ===================================================================================================================
00:29:28.080 Total : 4954.17 19.35 0.00 0.00 25707.71 10291.58 40583.77
00:29:28.080 0
00:29:28.080 07:35:00 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:29:28.081 07:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:29:28.338 07:35:00 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:29:28.338 07:35:00 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:29:28.338 07:35:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:28.338 07:35:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:28.338 07:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:28.339 07:35:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:28.596 07:35:01 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:29:28.596 07:35:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:29:28.596 07:35:01 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:29:28.596 07:35:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:28.596 07:35:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:28.596 07:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:28.854 07:35:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:29:28.854 07:35:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:29:28.854 07:35:01 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:28.854 07:35:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:29:28.854 07:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:29:29.112 [2024-07-25 07:35:01.501334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:29:29.112 [2024-07-25 07:35:01.501395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bc200 (107): Transport endpoint is not connected
00:29:29.112 [2024-07-25 07:35:01.502387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bc200 (9): Bad file descriptor
00:29:29.112 [2024-07-25 07:35:01.503386] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:29:29.112 [2024-07-25 07:35:01.503408] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:29:29.112 [2024-07-25 07:35:01.503423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:29:29.112 request:
00:29:29.112 {
00:29:29.112 "name": "nvme0",
00:29:29.112 "trtype": "tcp",
00:29:29.112 "traddr": "127.0.0.1",
00:29:29.112 "adrfam": "ipv4",
00:29:29.112 "trsvcid": "4420",
00:29:29.112 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:29.112 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:29.112 "prchk_reftag": false,
00:29:29.112 "prchk_guard": false,
00:29:29.112 "hdgst": false,
00:29:29.112 "ddgst": false,
00:29:29.112 "psk": "key1",
00:29:29.112 "method": "bdev_nvme_attach_controller",
00:29:29.112 "req_id": 1
00:29:29.112 }
00:29:29.112 Got JSON-RPC error response
00:29:29.112 response:
00:29:29.112 {
00:29:29.112 "code": -5,
00:29:29.112 "message": "Input/output error"
00:29:29.112 }
00:29:29.112 07:35:01 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:29:29.112 07:35:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:29.112 07:35:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:29.112 07:35:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:29.112 07:35:01 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:29:29.112 07:35:01 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:29:29.112 07:35:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:29.112 07:35:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:29.112 07:35:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:29.112 07:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:29.370 07:35:01 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:29:29.370 07:35:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1
00:29:29.370 07:35:01 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:29:29.370 07:35:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:29.370 07:35:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:29.370 07:35:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:29:29.370 07:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:29.628 07:35:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:29:29.628 07:35:02 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:29:29.628 07:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:29:29.886 07:35:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:29:29.886 07:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:29:30.144 07:35:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:29:30.144 07:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:30.144 07:35:02 keyring_file -- keyring/file.sh@77 -- # jq length
00:29:30.402 07:35:02 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:29:30.402 07:35:02 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.t0I6vsuFb6
00:29:30.402 07:35:02 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:30.402 07:35:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6
00:29:30.402 07:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6
00:29:30.660 [2024-07-25 07:35:03.024355] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.t0I6vsuFb6': 0100660
00:29:30.660 [2024-07-25 07:35:03.024391] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:29:30.660 request:
00:29:30.660 {
00:29:30.660 "name": "key0",
00:29:30.660 "path": "/tmp/tmp.t0I6vsuFb6",
00:29:30.660 "method": "keyring_file_add_key",
00:29:30.660 "req_id": 1
00:29:30.660 }
00:29:30.660 Got JSON-RPC error response
00:29:30.660 response:
00:29:30.660 {
00:29:30.660 "code": -1,
00:29:30.660 "message": "Operation not permitted" 00:29:30.660 } 00:29:30.660 07:35:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:30.660 07:35:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:30.660 07:35:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:30.660 07:35:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:30.660 07:35:03 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.t0I6vsuFb6 00:29:30.660 07:35:03 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6 00:29:30.660 07:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t0I6vsuFb6 00:29:30.918 07:35:03 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.t0I6vsuFb6 00:29:30.918 07:35:03 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:30.918 07:35:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:30.918 07:35:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:30.918 07:35:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:30.918 07:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:30.918 07:35:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:31.175 07:35:03 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:31.175 07:35:03 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:31.175 07:35:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.175 07:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.433 [2024-07-25 07:35:03.758366] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.t0I6vsuFb6': No such file or directory 00:29:31.433 [2024-07-25 07:35:03.758403] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:31.433 [2024-07-25 07:35:03.758440] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:31.433 [2024-07-25 07:35:03.758461] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:31.433 [2024-07-25 07:35:03.758474] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:31.433 request: 00:29:31.433 { 00:29:31.433 "name": "nvme0", 00:29:31.433 "trtype": "tcp", 00:29:31.433 "traddr": "127.0.0.1", 00:29:31.433 "adrfam": "ipv4", 00:29:31.433 "trsvcid": "4420", 00:29:31.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.433 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:31.433 "prchk_reftag": false, 00:29:31.433 "prchk_guard": false, 00:29:31.433 "hdgst": false, 00:29:31.433 "ddgst": false, 00:29:31.433 "psk": "key0", 00:29:31.433 "method": "bdev_nvme_attach_controller", 00:29:31.433 "req_id": 1 00:29:31.433 } 00:29:31.433 Got JSON-RPC error response 00:29:31.433 response: 00:29:31.433 { 00:29:31.433 "code": -19, 00:29:31.433 "message": "No such device" 00:29:31.433 } 00:29:31.433 07:35:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:31.433 07:35:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:31.433 07:35:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:31.433 07:35:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:31.433 07:35:03 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:31.433 07:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:31.691 07:35:04 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.afRAnGtMCp 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:31.691 07:35:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:31.691 07:35:04 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:29:31.691 07:35:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:31.691 07:35:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:31.691 07:35:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:31.691 07:35:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.afRAnGtMCp 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.afRAnGtMCp 00:29:31.691 07:35:04 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.afRAnGtMCp 00:29:31.691 07:35:04 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.afRAnGtMCp 00:29:31.691 07:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.afRAnGtMCp 00:29:31.949 07:35:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.949 07:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.205 nvme0n1 00:29:32.205 07:35:04 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:32.205 07:35:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.205 07:35:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.205 07:35:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.205 07:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.205 07:35:04 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.462 07:35:04 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:32.462 07:35:04 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:32.462 07:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:32.719 07:35:05 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:32.719 07:35:05 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:32.719 07:35:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.719 07:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.719 07:35:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.976 07:35:05 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:32.976 07:35:05 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:32.976 07:35:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.976 07:35:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.976 07:35:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.976 07:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.977 07:35:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.234 07:35:05 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:33.234 07:35:05 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:33.234 07:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:33.492 07:35:05 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:33.492 07:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.492 07:35:05 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:33.750 07:35:06 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:33.750 07:35:06 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.afRAnGtMCp 00:29:33.750 07:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.afRAnGtMCp 00:29:34.008 07:35:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0NCzA4O7md 00:29:34.008 07:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0NCzA4O7md 00:29:34.266 07:35:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:34.266 07:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:34.523 nvme0n1 00:29:34.523 07:35:06 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:34.523 07:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:34.781 07:35:07 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:34.781 "subsystems": [ 00:29:34.781 { 00:29:34.781 "subsystem": "keyring", 00:29:34.781 "config": [ 00:29:34.781 { 
00:29:34.781 "method": "keyring_file_add_key", 00:29:34.781 "params": { 00:29:34.781 "name": "key0", 00:29:34.781 "path": "/tmp/tmp.afRAnGtMCp" 00:29:34.781 } 00:29:34.781 }, 00:29:34.781 { 00:29:34.781 "method": "keyring_file_add_key", 00:29:34.782 "params": { 00:29:34.782 "name": "key1", 00:29:34.782 "path": "/tmp/tmp.0NCzA4O7md" 00:29:34.782 } 00:29:34.782 } 00:29:34.782 ] 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "subsystem": "iobuf", 00:29:34.782 "config": [ 00:29:34.782 { 00:29:34.782 "method": "iobuf_set_options", 00:29:34.782 "params": { 00:29:34.782 "small_pool_count": 8192, 00:29:34.782 "large_pool_count": 1024, 00:29:34.782 "small_bufsize": 8192, 00:29:34.782 "large_bufsize": 135168 00:29:34.782 } 00:29:34.782 } 00:29:34.782 ] 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "subsystem": "sock", 00:29:34.782 "config": [ 00:29:34.782 { 00:29:34.782 "method": "sock_set_default_impl", 00:29:34.782 "params": { 00:29:34.782 "impl_name": "posix" 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "sock_impl_set_options", 00:29:34.782 "params": { 00:29:34.782 "impl_name": "ssl", 00:29:34.782 "recv_buf_size": 4096, 00:29:34.782 "send_buf_size": 4096, 00:29:34.782 "enable_recv_pipe": true, 00:29:34.782 "enable_quickack": false, 00:29:34.782 "enable_placement_id": 0, 00:29:34.782 "enable_zerocopy_send_server": true, 00:29:34.782 "enable_zerocopy_send_client": false, 00:29:34.782 "zerocopy_threshold": 0, 00:29:34.782 "tls_version": 0, 00:29:34.782 "enable_ktls": false 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "sock_impl_set_options", 00:29:34.782 "params": { 00:29:34.782 "impl_name": "posix", 00:29:34.782 "recv_buf_size": 2097152, 00:29:34.782 "send_buf_size": 2097152, 00:29:34.782 "enable_recv_pipe": true, 00:29:34.782 "enable_quickack": false, 00:29:34.782 "enable_placement_id": 0, 00:29:34.782 "enable_zerocopy_send_server": true, 00:29:34.782 "enable_zerocopy_send_client": false, 00:29:34.782 "zerocopy_threshold": 0, 
00:29:34.782 "tls_version": 0, 00:29:34.782 "enable_ktls": false 00:29:34.782 } 00:29:34.782 } 00:29:34.782 ] 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "subsystem": "vmd", 00:29:34.782 "config": [] 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "subsystem": "accel", 00:29:34.782 "config": [ 00:29:34.782 { 00:29:34.782 "method": "accel_set_options", 00:29:34.782 "params": { 00:29:34.782 "small_cache_size": 128, 00:29:34.782 "large_cache_size": 16, 00:29:34.782 "task_count": 2048, 00:29:34.782 "sequence_count": 2048, 00:29:34.782 "buf_count": 2048 00:29:34.782 } 00:29:34.782 } 00:29:34.782 ] 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "subsystem": "bdev", 00:29:34.782 "config": [ 00:29:34.782 { 00:29:34.782 "method": "bdev_set_options", 00:29:34.782 "params": { 00:29:34.782 "bdev_io_pool_size": 65535, 00:29:34.782 "bdev_io_cache_size": 256, 00:29:34.782 "bdev_auto_examine": true, 00:29:34.782 "iobuf_small_cache_size": 128, 00:29:34.782 "iobuf_large_cache_size": 16 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "bdev_raid_set_options", 00:29:34.782 "params": { 00:29:34.782 "process_window_size_kb": 1024, 00:29:34.782 "process_max_bandwidth_mb_sec": 0 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "bdev_iscsi_set_options", 00:29:34.782 "params": { 00:29:34.782 "timeout_sec": 30 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "bdev_nvme_set_options", 00:29:34.782 "params": { 00:29:34.782 "action_on_timeout": "none", 00:29:34.782 "timeout_us": 0, 00:29:34.782 "timeout_admin_us": 0, 00:29:34.782 "keep_alive_timeout_ms": 10000, 00:29:34.782 "arbitration_burst": 0, 00:29:34.782 "low_priority_weight": 0, 00:29:34.782 "medium_priority_weight": 0, 00:29:34.782 "high_priority_weight": 0, 00:29:34.782 "nvme_adminq_poll_period_us": 10000, 00:29:34.782 "nvme_ioq_poll_period_us": 0, 00:29:34.782 "io_queue_requests": 512, 00:29:34.782 "delay_cmd_submit": true, 00:29:34.782 "transport_retry_count": 4, 00:29:34.782 
"bdev_retry_count": 3, 00:29:34.782 "transport_ack_timeout": 0, 00:29:34.782 "ctrlr_loss_timeout_sec": 0, 00:29:34.782 "reconnect_delay_sec": 0, 00:29:34.782 "fast_io_fail_timeout_sec": 0, 00:29:34.782 "disable_auto_failback": false, 00:29:34.782 "generate_uuids": false, 00:29:34.782 "transport_tos": 0, 00:29:34.782 "nvme_error_stat": false, 00:29:34.782 "rdma_srq_size": 0, 00:29:34.782 "io_path_stat": false, 00:29:34.782 "allow_accel_sequence": false, 00:29:34.782 "rdma_max_cq_size": 0, 00:29:34.782 "rdma_cm_event_timeout_ms": 0, 00:29:34.782 "dhchap_digests": [ 00:29:34.782 "sha256", 00:29:34.782 "sha384", 00:29:34.782 "sha512" 00:29:34.782 ], 00:29:34.782 "dhchap_dhgroups": [ 00:29:34.782 "null", 00:29:34.782 "ffdhe2048", 00:29:34.782 "ffdhe3072", 00:29:34.782 "ffdhe4096", 00:29:34.782 "ffdhe6144", 00:29:34.782 "ffdhe8192" 00:29:34.782 ] 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "bdev_nvme_attach_controller", 00:29:34.782 "params": { 00:29:34.782 "name": "nvme0", 00:29:34.782 "trtype": "TCP", 00:29:34.782 "adrfam": "IPv4", 00:29:34.782 "traddr": "127.0.0.1", 00:29:34.782 "trsvcid": "4420", 00:29:34.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:34.782 "prchk_reftag": false, 00:29:34.782 "prchk_guard": false, 00:29:34.782 "ctrlr_loss_timeout_sec": 0, 00:29:34.782 "reconnect_delay_sec": 0, 00:29:34.782 "fast_io_fail_timeout_sec": 0, 00:29:34.782 "psk": "key0", 00:29:34.782 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:34.782 "hdgst": false, 00:29:34.782 "ddgst": false 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "bdev_nvme_set_hotplug", 00:29:34.782 "params": { 00:29:34.782 "period_us": 100000, 00:29:34.782 "enable": false 00:29:34.782 } 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "method": "bdev_wait_for_examine" 00:29:34.782 } 00:29:34.782 ] 00:29:34.782 }, 00:29:34.782 { 00:29:34.782 "subsystem": "nbd", 00:29:34.782 "config": [] 00:29:34.782 } 00:29:34.782 ] 00:29:34.782 }' 00:29:34.782 07:35:07 keyring_file 
-- keyring/file.sh@114 -- # killprocess 2604063 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2604063 ']' 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2604063 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2604063 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2604063' 00:29:34.782 killing process with pid 2604063 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@969 -- # kill 2604063 00:29:34.782 Received shutdown signal, test time was about 1.000000 seconds 00:29:34.782 00:29:34.782 Latency(us) 00:29:34.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.782 =================================================================================================================== 00:29:34.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.782 07:35:07 keyring_file -- common/autotest_common.sh@974 -- # wait 2604063 00:29:35.349 07:35:07 keyring_file -- keyring/file.sh@117 -- # bperfpid=2605521 00:29:35.349 07:35:07 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2605521 /var/tmp/bperf.sock 00:29:35.349 07:35:07 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2605521 ']' 00:29:35.349 07:35:07 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:35.349 07:35:07 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r 
/var/tmp/bperf.sock -z -c /dev/fd/63 00:29:35.349 07:35:07 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.349 07:35:07 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:35.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:35.349 07:35:07 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:35.349 "subsystems": [ 00:29:35.349 { 00:29:35.349 "subsystem": "keyring", 00:29:35.349 "config": [ 00:29:35.349 { 00:29:35.349 "method": "keyring_file_add_key", 00:29:35.349 "params": { 00:29:35.349 "name": "key0", 00:29:35.349 "path": "/tmp/tmp.afRAnGtMCp" 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": "keyring_file_add_key", 00:29:35.349 "params": { 00:29:35.349 "name": "key1", 00:29:35.349 "path": "/tmp/tmp.0NCzA4O7md" 00:29:35.349 } 00:29:35.349 } 00:29:35.349 ] 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "subsystem": "iobuf", 00:29:35.349 "config": [ 00:29:35.349 { 00:29:35.349 "method": "iobuf_set_options", 00:29:35.349 "params": { 00:29:35.349 "small_pool_count": 8192, 00:29:35.349 "large_pool_count": 1024, 00:29:35.349 "small_bufsize": 8192, 00:29:35.349 "large_bufsize": 135168 00:29:35.349 } 00:29:35.349 } 00:29:35.349 ] 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "subsystem": "sock", 00:29:35.349 "config": [ 00:29:35.349 { 00:29:35.349 "method": "sock_set_default_impl", 00:29:35.349 "params": { 00:29:35.349 "impl_name": "posix" 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": "sock_impl_set_options", 00:29:35.349 "params": { 00:29:35.349 "impl_name": "ssl", 00:29:35.349 "recv_buf_size": 4096, 00:29:35.349 "send_buf_size": 4096, 00:29:35.349 "enable_recv_pipe": true, 00:29:35.349 "enable_quickack": false, 00:29:35.349 "enable_placement_id": 0, 00:29:35.349 "enable_zerocopy_send_server": true, 00:29:35.349 "enable_zerocopy_send_client": false, 
00:29:35.349 "zerocopy_threshold": 0, 00:29:35.349 "tls_version": 0, 00:29:35.349 "enable_ktls": false 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": "sock_impl_set_options", 00:29:35.349 "params": { 00:29:35.349 "impl_name": "posix", 00:29:35.349 "recv_buf_size": 2097152, 00:29:35.349 "send_buf_size": 2097152, 00:29:35.349 "enable_recv_pipe": true, 00:29:35.349 "enable_quickack": false, 00:29:35.349 "enable_placement_id": 0, 00:29:35.349 "enable_zerocopy_send_server": true, 00:29:35.349 "enable_zerocopy_send_client": false, 00:29:35.349 "zerocopy_threshold": 0, 00:29:35.349 "tls_version": 0, 00:29:35.349 "enable_ktls": false 00:29:35.349 } 00:29:35.349 } 00:29:35.349 ] 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "subsystem": "vmd", 00:29:35.349 "config": [] 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "subsystem": "accel", 00:29:35.349 "config": [ 00:29:35.349 { 00:29:35.349 "method": "accel_set_options", 00:29:35.349 "params": { 00:29:35.349 "small_cache_size": 128, 00:29:35.349 "large_cache_size": 16, 00:29:35.349 "task_count": 2048, 00:29:35.349 "sequence_count": 2048, 00:29:35.349 "buf_count": 2048 00:29:35.349 } 00:29:35.349 } 00:29:35.349 ] 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "subsystem": "bdev", 00:29:35.349 "config": [ 00:29:35.349 { 00:29:35.349 "method": "bdev_set_options", 00:29:35.349 "params": { 00:29:35.349 "bdev_io_pool_size": 65535, 00:29:35.349 "bdev_io_cache_size": 256, 00:29:35.349 "bdev_auto_examine": true, 00:29:35.349 "iobuf_small_cache_size": 128, 00:29:35.349 "iobuf_large_cache_size": 16 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": "bdev_raid_set_options", 00:29:35.349 "params": { 00:29:35.349 "process_window_size_kb": 1024, 00:29:35.349 "process_max_bandwidth_mb_sec": 0 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": "bdev_iscsi_set_options", 00:29:35.349 "params": { 00:29:35.349 "timeout_sec": 30 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": 
"bdev_nvme_set_options", 00:29:35.349 "params": { 00:29:35.349 "action_on_timeout": "none", 00:29:35.349 "timeout_us": 0, 00:29:35.349 "timeout_admin_us": 0, 00:29:35.349 "keep_alive_timeout_ms": 10000, 00:29:35.349 "arbitration_burst": 0, 00:29:35.349 "low_priority_weight": 0, 00:29:35.349 "medium_priority_weight": 0, 00:29:35.349 "high_priority_weight": 0, 00:29:35.349 "nvme_adminq_poll_period_us": 10000, 00:29:35.349 "nvme_ioq_poll_period_us": 0, 00:29:35.349 "io_queue_requests": 512, 00:29:35.349 "delay_cmd_submit": true, 00:29:35.349 "transport_retry_count": 4, 00:29:35.349 "bdev_retry_count": 3, 00:29:35.349 "transport_ack_timeout": 0, 00:29:35.349 "ctrlr_loss_timeout_sec": 0, 00:29:35.349 "reconnect_delay_sec": 0, 00:29:35.349 "fast_io_fail_timeout_sec": 0, 00:29:35.349 "disable_auto_failback": false, 00:29:35.349 "generate_uuids": false, 00:29:35.349 "transport_tos": 0, 00:29:35.349 "nvme_error_stat": false, 00:29:35.349 "rdma_srq_size": 0, 00:29:35.349 "io_path_stat": false, 00:29:35.349 "allow_accel_sequence": false, 00:29:35.349 "rdma_max_cq_size": 0, 00:29:35.349 "rdma_cm_event_timeout_ms": 0, 00:29:35.349 "dhchap_digests": [ 00:29:35.349 "sha256", 00:29:35.349 "sha384", 00:29:35.349 "sha512" 00:29:35.349 ], 00:29:35.349 "dhchap_dhgroups": [ 00:29:35.349 "null", 00:29:35.349 "ffdhe2048", 00:29:35.349 "ffdhe3072", 00:29:35.349 "ffdhe4096", 00:29:35.349 "ffdhe6144", 00:29:35.349 "ffdhe8192" 00:29:35.349 ] 00:29:35.349 } 00:29:35.349 }, 00:29:35.349 { 00:29:35.349 "method": "bdev_nvme_attach_controller", 00:29:35.349 "params": { 00:29:35.349 "name": "nvme0", 00:29:35.349 "trtype": "TCP", 00:29:35.350 "adrfam": "IPv4", 00:29:35.350 "traddr": "127.0.0.1", 00:29:35.350 "trsvcid": "4420", 00:29:35.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.350 "prchk_reftag": false, 00:29:35.350 "prchk_guard": false, 00:29:35.350 "ctrlr_loss_timeout_sec": 0, 00:29:35.350 "reconnect_delay_sec": 0, 00:29:35.350 "fast_io_fail_timeout_sec": 0, 00:29:35.350 "psk": "key0", 
00:29:35.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.350 "hdgst": false, 00:29:35.350 "ddgst": false 00:29:35.350 } 00:29:35.350 }, 00:29:35.350 { 00:29:35.350 "method": "bdev_nvme_set_hotplug", 00:29:35.350 "params": { 00:29:35.350 "period_us": 100000, 00:29:35.350 "enable": false 00:29:35.350 } 00:29:35.350 }, 00:29:35.350 { 00:29:35.350 "method": "bdev_wait_for_examine" 00:29:35.350 } 00:29:35.350 ] 00:29:35.350 }, 00:29:35.350 { 00:29:35.350 "subsystem": "nbd", 00:29:35.350 "config": [] 00:29:35.350 } 00:29:35.350 ] 00:29:35.350 }' 00:29:35.350 07:35:07 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.350 07:35:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:35.350 [2024-07-25 07:35:07.615203] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 00:29:35.350 [2024-07-25 07:35:07.615324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605521 ] 00:29:35.350 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.350 [2024-07-25 07:35:07.676386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.350 [2024-07-25 07:35:07.791202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.608 [2024-07-25 07:35:07.974872] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:36.174 07:35:08 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.174 07:35:08 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:36.174 07:35:08 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:36.174 07:35:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.174 07:35:08 
keyring_file -- keyring/file.sh@120 -- # jq length 00:29:36.432 07:35:08 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:36.432 07:35:08 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:36.432 07:35:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:36.432 07:35:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.432 07:35:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.432 07:35:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.432 07:35:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:36.690 07:35:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:36.690 07:35:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:36.690 07:35:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:36.690 07:35:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.690 07:35:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.690 07:35:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.690 07:35:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:36.947 07:35:09 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:36.948 07:35:09 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:36.948 07:35:09 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:36.948 07:35:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:37.206 07:35:09 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:37.206 07:35:09 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:37.206 07:35:09 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.afRAnGtMCp /tmp/tmp.0NCzA4O7md 00:29:37.206 07:35:09 keyring_file -- keyring/file.sh@20 -- # killprocess 2605521 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2605521 ']' 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2605521 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2605521 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2605521' 00:29:37.206 killing process with pid 2605521 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@969 -- # kill 2605521 00:29:37.206 Received shutdown signal, test time was about 1.000000 seconds 00:29:37.206 00:29:37.206 Latency(us) 00:29:37.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.206 =================================================================================================================== 00:29:37.206 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:37.206 07:35:09 keyring_file -- common/autotest_common.sh@974 -- # wait 2605521 00:29:37.465 07:35:09 keyring_file -- keyring/file.sh@21 -- # killprocess 2604057 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2604057 ']' 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2604057 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2604057 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2604057' 00:29:37.465 killing process with pid 2604057 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@969 -- # kill 2604057 00:29:37.465 [2024-07-25 07:35:09.887637] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:37.465 07:35:09 keyring_file -- common/autotest_common.sh@974 -- # wait 2604057 00:29:38.031 00:29:38.031 real 0m14.323s 00:29:38.031 user 0m35.401s 00:29:38.031 sys 0m3.156s 00:29:38.031 07:35:10 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:38.031 07:35:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:38.031 ************************************ 00:29:38.031 END TEST keyring_file 00:29:38.031 ************************************ 00:29:38.031 07:35:10 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:29:38.031 07:35:10 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:38.031 07:35:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:38.031 07:35:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.031 07:35:10 -- common/autotest_common.sh@10 -- # set +x 00:29:38.031 ************************************ 00:29:38.031 START TEST keyring_linux 00:29:38.031 ************************************ 00:29:38.031 07:35:10 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:38.031 * Looking for test storage... 
00:29:38.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.031 07:35:10 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.031 07:35:10 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.031 07:35:10 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.031 07:35:10 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.031 07:35:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.031 07:35:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.031 07:35:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.031 07:35:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:38.031 07:35:10 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:38.031 07:35:10 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:38.031 /tmp/:spdk-test:key0 00:29:38.031 07:35:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:38.031 07:35:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:38.031 07:35:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:38.032 07:35:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:38.032 /tmp/:spdk-test:key1 00:29:38.032 07:35:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2605890 00:29:38.032 07:35:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:38.032 07:35:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2605890 00:29:38.032 07:35:10 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2605890 ']' 00:29:38.032 07:35:10 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.032 07:35:10 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:38.032 07:35:10 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.032 07:35:10 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.032 07:35:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:38.289 [2024-07-25 07:35:10.591407] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:29:38.289 [2024-07-25 07:35:10.591496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605890 ] 00:29:38.289 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.289 [2024-07-25 07:35:10.652069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.289 [2024-07-25 07:35:10.769718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.547 07:35:11 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.547 07:35:11 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:38.547 07:35:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:38.547 07:35:11 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.547 07:35:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:38.547 [2024-07-25 07:35:11.035307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.547 null0 00:29:38.547 [2024-07-25 07:35:11.067362] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:38.547 [2024-07-25 07:35:11.067861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:38.805 07:35:11 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.805 07:35:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:38.805 47876345 00:29:38.805 07:35:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:38.805 878814310 00:29:38.805 07:35:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2606022 00:29:38.805 07:35:11 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:38.805 07:35:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2606022 /var/tmp/bperf.sock 00:29:38.805 07:35:11 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2606022 ']' 00:29:38.805 07:35:11 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.805 07:35:11 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:38.805 07:35:11 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.806 07:35:11 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.806 07:35:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:38.806 [2024-07-25 07:35:11.132785] Starting SPDK v24.09-pre git sha1 e5ef9abc9 / DPDK 24.03.0 initialization... 
00:29:38.806 [2024-07-25 07:35:11.132848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606022 ] 00:29:38.806 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.806 [2024-07-25 07:35:11.192782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.806 [2024-07-25 07:35:11.308759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.063 07:35:11 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.063 07:35:11 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:39.063 07:35:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:39.063 07:35:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:39.063 07:35:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:39.063 07:35:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:39.629 07:35:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:39.629 07:35:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:39.886 [2024-07-25 07:35:12.162485] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:39.886 
nvme0n1 00:29:39.886 07:35:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:39.886 07:35:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:39.886 07:35:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:39.886 07:35:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:39.886 07:35:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:39.886 07:35:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.144 07:35:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:40.144 07:35:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:40.144 07:35:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:40.144 07:35:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:40.144 07:35:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.144 07:35:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.144 07:35:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@25 -- # sn=47876345 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 47876345 == \4\7\8\7\6\3\4\5 ]] 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 47876345 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:40.402 07:35:12 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:40.402 Running I/O for 1 seconds... 00:29:41.772 00:29:41.772 Latency(us) 00:29:41.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.772 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:41.772 nvme0n1 : 1.02 4574.58 17.87 0.00 0.00 27738.11 6796.33 37282.70 00:29:41.772 =================================================================================================================== 00:29:41.772 Total : 4574.58 17.87 0.00 0.00 27738.11 6796.33 37282.70 00:29:41.772 0 00:29:41.772 07:35:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:41.772 07:35:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:41.772 07:35:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:41.772 07:35:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:41.772 07:35:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:41.773 07:35:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:41.773 07:35:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:41.773 07:35:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.035 07:35:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:42.035 07:35:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:42.035 07:35:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:42.035 07:35:14 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:42.035 07:35:14 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:42.035 07:35:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:42.293 [2024-07-25 07:35:14.629632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824020 (107): Transport endpoint is not connected 00:29:42.293 [2024-07-25 07:35:14.629644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:42.293 [2024-07-25 07:35:14.630631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x824020 (9): Bad file descriptor 00:29:42.293 [2024-07-25 07:35:14.631630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.293 [2024-07-25 07:35:14.631650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:42.293 [2024-07-25 07:35:14.631678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.293 request: 00:29:42.293 { 00:29:42.293 "name": "nvme0", 00:29:42.293 "trtype": "tcp", 00:29:42.293 "traddr": "127.0.0.1", 00:29:42.293 "adrfam": "ipv4", 00:29:42.293 "trsvcid": "4420", 00:29:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.293 "prchk_reftag": false, 00:29:42.293 "prchk_guard": false, 00:29:42.293 "hdgst": false, 00:29:42.293 "ddgst": false, 00:29:42.293 "psk": ":spdk-test:key1", 00:29:42.293 "method": "bdev_nvme_attach_controller", 00:29:42.293 "req_id": 1 00:29:42.293 } 00:29:42.293 Got JSON-RPC error response 00:29:42.293 response: 00:29:42.293 { 00:29:42.293 "code": -5, 00:29:42.293 "message": "Input/output error" 00:29:42.293 } 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@33 -- # sn=47876345 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 47876345 00:29:42.293 1 links removed 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@33 -- # sn=878814310 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 878814310 00:29:42.293 1 links removed 00:29:42.293 07:35:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2606022 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2606022 ']' 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2606022 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2606022 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2606022' 00:29:42.293 killing process with pid 2606022 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@969 -- # kill 2606022 00:29:42.293 Received shutdown signal, test time was about 1.000000 seconds 00:29:42.293 00:29:42.293 Latency(us) 00:29:42.293 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.293 =================================================================================================================== 00:29:42.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.293 07:35:14 keyring_linux -- common/autotest_common.sh@974 -- # wait 2606022 00:29:42.550 07:35:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2605890 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2605890 ']' 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2605890 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2605890 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2605890' 00:29:42.551 killing process with pid 2605890 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@969 -- # kill 2605890 00:29:42.551 07:35:14 keyring_linux -- common/autotest_common.sh@974 -- # wait 2605890 00:29:43.116 00:29:43.116 real 0m5.071s 00:29:43.116 user 0m9.460s 00:29:43.116 sys 0m1.509s 00:29:43.116 07:35:15 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:43.116 07:35:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:43.116 ************************************ 00:29:43.116 END TEST keyring_linux 00:29:43.116 ************************************ 00:29:43.116 07:35:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:29:43.116 07:35:15 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:43.116 07:35:15 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:43.116 07:35:15 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:43.116 07:35:15 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:29:43.116 07:35:15 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:29:43.116 07:35:15 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:29:43.116 07:35:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.116 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:29:43.116 07:35:15 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:29:43.116 07:35:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:43.116 07:35:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:43.116 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:29:45.017 INFO: APP EXITING 00:29:45.017 INFO: killing all VMs 00:29:45.017 INFO: killing vhost app 00:29:45.017 INFO: EXIT DONE 00:29:45.951 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:45.951 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:45.951 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:45.951 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:45.951 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:45.951 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:45.951 0000:00:04.2 (8086 0e22): Already 
using the ioatdma driver 00:29:45.951 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:45.951 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:45.951 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:45.951 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:45.951 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:45.951 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:45.951 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:45.951 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:45.951 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:45.951 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:47.326 Cleaning 00:29:47.326 Removing: /var/run/dpdk/spdk0/config 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:47.326 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:47.326 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:47.326 Removing: /var/run/dpdk/spdk1/config 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:47.326 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:47.326 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:47.326 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:47.326 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:47.326 Removing: /var/run/dpdk/spdk2/config 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:47.326 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:47.326 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:47.326 Removing: /var/run/dpdk/spdk3/config 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:47.326 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:47.326 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:47.326 Removing: /var/run/dpdk/spdk4/config 00:29:47.326 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:47.326 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:47.326 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:47.326 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:47.326 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:47.326 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:47.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:47.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:47.327 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:47.327 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:47.327 Removing: /dev/shm/bdev_svc_trace.1 00:29:47.327 Removing: /dev/shm/nvmf_trace.0 00:29:47.327 Removing: /dev/shm/spdk_tgt_trace.pid2349758 00:29:47.327 Removing: /var/run/dpdk/spdk0 00:29:47.327 Removing: /var/run/dpdk/spdk1 00:29:47.327 Removing: /var/run/dpdk/spdk2 00:29:47.327 Removing: /var/run/dpdk/spdk3 00:29:47.327 Removing: /var/run/dpdk/spdk4 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2348086 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2348855 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2349758 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2350198 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2350883 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2351025 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2351743 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2351778 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2352018 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2353442 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2354854 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2355165 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2355356 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2355565 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2355755 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2355910 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2356158 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2356368 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2356564 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2359035 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2359198 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2359410 00:29:47.327 Removing: 
/var/run/dpdk/spdk_pid2359496 00:29:47.327 Removing: /var/run/dpdk/spdk_pid2359806 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2359936 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2360242 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2360370 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2360541 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2360680 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2360844 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2360982 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2361345 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2361591 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2361821 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2363903 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2366522 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2373376 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2373899 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2376424 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2376695 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2379340 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2383049 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2385228 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2392264 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2397613 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2398824 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2399491 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2409716 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2412112 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2438307 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2441606 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2445420 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2449261 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2449263 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2449923 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2450575 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2451118 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2451520 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2451637 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2451782 
00:29:47.585 Removing: /var/run/dpdk/spdk_pid2451916 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2451922 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2452574 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2453112 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2453775 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2454168 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2454179 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2454436 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2455339 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2456055 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2461884 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2487463 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2490249 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2491425 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2492737 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2492770 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2492899 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2493036 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2493471 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2494787 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2495526 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2495955 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2497705 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2498152 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2498708 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2501230 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2507259 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2509913 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2514287 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2515363 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2516428 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2519034 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2521401 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2525607 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2525611 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2528380 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2528514 00:29:47.585 Removing: 
/var/run/dpdk/spdk_pid2528771 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2529035 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2529045 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2531801 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2532145 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2534793 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2536665 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2540203 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2543523 00:29:47.585 Removing: /var/run/dpdk/spdk_pid2549972 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2554835 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2554837 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2567198 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2567634 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2568127 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2568539 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2569120 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2569530 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2569943 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2570463 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2572970 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2573235 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2577019 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2577204 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2578812 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2583843 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2583886 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2587247 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2588751 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2590170 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2590915 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2592322 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2593195 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2598597 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2598989 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2599377 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2600820 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2601218 
00:29:47.586 Removing: /var/run/dpdk/spdk_pid2601615 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2604057 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2604063 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2605521 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2605890 00:29:47.586 Removing: /var/run/dpdk/spdk_pid2606022 00:29:47.586 Clean 00:29:47.845 07:35:20 -- common/autotest_common.sh@1451 -- # return 0 00:29:47.845 07:35:20 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:29:47.845 07:35:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.845 07:35:20 -- common/autotest_common.sh@10 -- # set +x 00:29:47.845 07:35:20 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:29:47.845 07:35:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.845 07:35:20 -- common/autotest_common.sh@10 -- # set +x 00:29:47.845 07:35:20 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:47.845 07:35:20 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:47.845 07:35:20 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:47.845 07:35:20 -- spdk/autotest.sh@395 -- # hash lcov 00:29:47.845 07:35:20 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:47.845 07:35:20 -- spdk/autotest.sh@397 -- # hostname 00:29:47.845 07:35:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:48.102 geninfo: WARNING: invalid characters removed from testname! 
00:30:20.230 07:35:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:20.231 07:35:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:22.757 07:35:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:26.037 07:35:57 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:28.564 07:36:00 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:31.841 07:36:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:34.366 07:36:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:34.625 07:36:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.625 07:36:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:34.625 07:36:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.625 07:36:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.625 07:36:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.625 07:36:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.625 07:36:06 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.625 07:36:06 -- paths/export.sh@5 -- $ export PATH 00:30:34.625 07:36:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.625 07:36:06 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:34.625 07:36:06 -- common/autobuild_common.sh@447 -- $ date +%s 00:30:34.625 07:36:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721885766.XXXXXX 00:30:34.625 07:36:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721885766.y9Fdc9 00:30:34.625 07:36:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:30:34.625 07:36:06 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:30:34.625 07:36:06 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:34.625 07:36:06 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:34.625 07:36:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:34.625 07:36:06 -- common/autobuild_common.sh@463 -- $ get_config_params 00:30:34.625 07:36:06 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:30:34.625 07:36:06 -- common/autotest_common.sh@10 -- $ set +x 00:30:34.625 07:36:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:34.625 07:36:06 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:30:34.625 07:36:06 -- pm/common@17 -- $ local monitor 00:30:34.625 07:36:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:34.625 07:36:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:34.625 07:36:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:34.625 07:36:06 -- pm/common@21 -- $ date +%s 00:30:34.625 07:36:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:34.625 07:36:06 -- pm/common@21 -- $ date +%s 00:30:34.625 07:36:06 -- pm/common@25 -- $ sleep 1 00:30:34.625 07:36:06 -- pm/common@21 -- $ date +%s 00:30:34.625 07:36:06 -- pm/common@21 -- $ date +%s 00:30:34.625 07:36:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885766 00:30:34.625 07:36:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885766 00:30:34.625 07:36:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721885766 00:30:34.625 07:36:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721885766 00:30:34.625 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885766_collect-vmstat.pm.log 00:30:34.625 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885766_collect-cpu-load.pm.log 00:30:34.625 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885766_collect-cpu-temp.pm.log 00:30:34.625 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721885766_collect-bmc-pm.bmc.pm.log 00:30:35.560 07:36:07 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:30:35.560 07:36:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:35.560 07:36:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:35.560 07:36:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:35.560 07:36:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:35.560 07:36:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:35.560 07:36:07 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:35.560 07:36:07 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:35.560 07:36:07 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:35.560 07:36:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:35.560 07:36:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:35.560 07:36:08 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:30:35.560 07:36:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:35.560 07:36:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:35.560 07:36:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:35.560 07:36:08 -- pm/common@44 -- $ pid=2615769 00:30:35.560 07:36:08 -- pm/common@50 -- $ kill -TERM 2615769 00:30:35.560 07:36:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:35.560 07:36:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:35.560 07:36:08 -- pm/common@44 -- $ pid=2615771 00:30:35.560 07:36:08 -- pm/common@50 -- $ kill -TERM 2615771 00:30:35.560 07:36:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:35.560 07:36:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:35.560 07:36:08 -- pm/common@44 -- $ pid=2615774 00:30:35.560 07:36:08 -- pm/common@50 -- $ kill -TERM 2615774 00:30:35.560 07:36:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:35.560 07:36:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:35.560 07:36:08 -- pm/common@44 -- $ pid=2615821 00:30:35.560 07:36:08 -- pm/common@50 -- $ sudo -E kill -TERM 2615821 00:30:35.560 + [[ -n 2263989 ]] 00:30:35.560 + sudo kill 2263989 00:30:35.575 [Pipeline] } 00:30:35.595 [Pipeline] // stage 00:30:35.600 [Pipeline] } 00:30:35.620 [Pipeline] // timeout 00:30:35.626 [Pipeline] } 00:30:35.642 [Pipeline] // catchError 00:30:35.647 [Pipeline] } 00:30:35.665 [Pipeline] // wrap 00:30:35.672 [Pipeline] } 00:30:35.690 [Pipeline] // catchError 00:30:35.700 [Pipeline] stage 00:30:35.702 [Pipeline] { (Epilogue) 00:30:35.717 [Pipeline] catchError 00:30:35.719 [Pipeline] { 00:30:35.734 [Pipeline] echo 00:30:35.735 Cleanup 
processes 00:30:35.741 [Pipeline] sh 00:30:36.023 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:36.023 2615961 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:36.023 2616325 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:36.036 [Pipeline] sh 00:30:36.317 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:36.317 ++ grep -v 'sudo pgrep' 00:30:36.317 ++ awk '{print $1}' 00:30:36.317 + sudo kill -9 2615961 00:30:36.328 [Pipeline] sh 00:30:36.609 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:44.734 [Pipeline] sh 00:30:45.017 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:45.017 Artifacts sizes are good 00:30:45.032 [Pipeline] archiveArtifacts 00:30:45.040 Archiving artifacts 00:30:45.214 [Pipeline] sh 00:30:45.496 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:45.511 [Pipeline] cleanWs 00:30:45.522 [WS-CLEANUP] Deleting project workspace... 00:30:45.522 [WS-CLEANUP] Deferred wipeout is used... 00:30:45.528 [WS-CLEANUP] done 00:30:45.529 [Pipeline] } 00:30:45.550 [Pipeline] // catchError 00:30:45.563 [Pipeline] sh 00:30:45.843 + logger -p user.info -t JENKINS-CI 00:30:45.851 [Pipeline] } 00:30:45.868 [Pipeline] // stage 00:30:45.874 [Pipeline] } 00:30:45.890 [Pipeline] // node 00:30:45.896 [Pipeline] End of Pipeline 00:30:45.933 Finished: SUCCESS